A song featuring AI-generated fake vocals from Drake and The Weeknd might be a scary moment for artists and labels whose livelihoods feel threatened, but does it violate the law? It’s a complicated question.
The song “Heart on My Sleeve,” which also featured Metro Boomin’s distinctive producer tag, racked up hundreds of thousands of spins on streaming services before it was pulled down on Monday evening, powered to viral status by uncannily similar vocals over a catchy instrumental track. Millions more have viewed shorter snippets of the song that the anonymous creator posted to TikTok.
It’s unclear whether only the soundalike vocals were created with AI tools – a common trick used for years in internet parody videos and deepfakes – or if the entire song was created solely by a machine based purely on a prompt to create a Drake track, a more novel and potentially disruptive development.
For an industry already on edge about the sudden growth of artificial intelligence, the appearance of a song that convincingly replicated the work product of two of music’s biggest stars and one of its top producers and won over likely millions of listeners has set off serious alarm bells.
“The ability to create a new work this realistic and specific is disconcerting, and could pose a range of threats and challenges to rightsowners, musicians, and the businesses that invest in them,” says Jonathan Faber, the founder of Luminary Group and an attorney who specializes in protecting the likeness rights of famous individuals. “I say that without attempting to get into even thornier problems, which likely also exist as this technology demonstrates what it may be capable of.”
“Heart On My Sleeve” was quickly pulled down, disappearing from most streaming services by Monday evening. Representatives for Drake, The Weeknd and Spotify all declined to comment when asked about the song on Monday. And while the artists’ label, Universal Music Group, issued a strongly worded statement condemning “infringing content created with generative AI,” a spokesperson would not say whether the company had sent formal takedown requests over the song.
A rep for YouTube said on Tuesday that the platform “removed the video in question after receiving a valid takedown notice,” noting that the track was removed because it used a copyrighted music sample.
The debacle highlights a monumental legal question for the music industry, one likely to sit at the center of legal battles for years to come: To what extent do AI-generated songs violate the law? Though “Heart on My Sleeve” was removed relatively quickly, the question is more complicated than it might seem.
For starters, the song appears to be an original composition that doesn’t directly copy any of Drake or the Weeknd’s songs, meaning that it could be hard to make a claim that it infringes their copyrights, like when an artist uses elements of someone else’s song without permission. While Metro Boomin’s tag may have been illegally sampled, that element likely won’t exist in future fake songs.
By mimicking their voices, however, the track represents a clearer potential violation of Drake and Weeknd’s so-called right of publicity – the legal right to control how your individual identity is commercially exploited by others. Such rights are more typically invoked when someone’s name or visual likeness is stolen, but they can extend to someone’s voice if it’s particularly well-known – think Morgan Freeman or James Earl Jones.
“The right of publicity provides recourse for rights owners who would otherwise be very vulnerable to technology like this,” Faber said. “It fits here because a song is convincingly identifiable as Drake and the Weeknd.”
Whether a right of publicity lawsuit is legally viable against this kind of voice mimicry might be tested in court soon, albeit in a case dealing with decidedly more old school tech.
Back in January, Rick Astley sued Yung Gravy over the rapper’s breakout 2022 hit that heavily borrowed from the singer’s iconic “Never Gonna Give You Up.” While Yung Gravy had licensed the underlying composition, Astley claimed Yung Gravy violated his right of publicity when he hired a singer who mimicked his distinctive voice.
That case has key differences from the situation with “Heart on My Sleeve,” like the allegation that Gravy falsely suggested to his listeners that Astley had actually endorsed his song. In the case of “Heart on My Sleeve,” the anonymous creator Ghostwriter omitted any reference to Drake and The Weeknd on streaming platforms; on TikTok, he directly stated that he, and not the two superstars, had created his song using AI.
But for Richard Busch of the law firm King & Ballow, a veteran music industry litigator who brought the lawsuit on behalf of Astley, the right of publicity and its protections for likeness still provides the most useful tool for artists and labels confronted with such a scenario in the future.
“If you are creating a song that sounds identical to, let’s say, Rihanna, regardless of what you say people are going to believe that it was Rihanna. I think there’s no way to get around that,” Busch said. “The strongest claim here would be the use of likeness.”
But do AI companies themselves break the law when they create programs that can so effectively mimic Drake and The Weeknd’s voices? That would seem to be the far larger looming crisis, and one without the same kind of relatively clear legal answers.
The fight ahead will likely be over how AI platforms are “trained” – the process whereby machines “learn” to spit out new creations by ingesting millions of existing works. From the point of view of many in the music industry, if that process is accomplished by feeding a platform copyrighted songs — in this case, presumably, recordings by Drake and The Weeknd — then those platforms and their owners are infringing copyrights on a mass scale.
In UMG’s statement Monday, the label said clearly that it believes such training to be a “violation of copyright law,” and the company previously warned that it “will not hesitate to take steps to protect our rights and those of our artists.” The RIAA has said the same, blasting AI companies for making “unauthorized copies of our members’ works” to train their machines.
While the training issue is legally novel and unresolved, it could be answered in court soon. A group of visual artists has filed a class action over the use of their copyrighted images to train AI platforms, and Getty Images has filed a similar case against AI companies that allegedly “scraped” its database for training materials.
And after this week’s incident over “Heart on My Sleeve,” a similar lawsuit against AI platforms filed by artists or music companies gets more likely by the day.
National Association of Broadcasters president and CEO Curtis LeGeyt spoke out on the potential dangers of Artificial Intelligence on Monday at the NAB Show in Las Vegas. “This is an area where NAB will absolutely be active,” he asserted of AI, which is one of the buzziest topics this week at the annual convention. “It is just amazing how quickly the relevance of AI to our entire economy — but specifically, since we’re in this room, the broadcast industry — has gone from amorphous concept to real.”
LeGeyt warned of several concerns that he has for local broadcasters, the first being issues surrounding “big tech” taking broadcast content and not fairly compensating broadcasters for its use. “We have been fighting for legislation to put some guardrails on it,” LeGeyt said. “AI has the potential to put that on overdrive. We need to ensure that our stations, our content creators are going to be fairly compensated.”
He added that he worries for journalists. “We’re already under attack for any slip-up we might have with regard to misreporting on a story. Well, you’re gonna have to do a heck of a lot more diligence to ensure that whatever you are reporting on is real, fact-based information and not just some AI bot that happens to look like Joe Biden.” Finally, he warned of images and likenesses being misappropriated where AI is involved.
“I want to wave the caution flag on some of these areas,” he said. “I think this could be really damaging for local broadcast.”
During his talk, he also outlined what he sees as potential opportunities. “My own view is there are some real potentially hyperlocal benefits to AI,” he said, citing as examples translation services and the ability to speed up research at “resource-constrained local stations.” He asserted, “Investigative journalism is never going to be replaced by AI. Our role at local community events, philanthropic work, is never going to be replaced by AI. But to the degree that we can leverage AI to do some of the things that are time-consuming and take away your ability to be boots on the ground doing the things that only you can do well, I think that’s a positive.”
Also addressed during the session was the voluntary rollout of the next generation of digital television, known as ATSC 3.0, which may include capabilities such as free, live broadcasting to mobile devices. A change of this magnitude has a lot of moving parts and has a long way to go before its potential can be realized.
At NAB, FCC chairwoman Jessica Rosenworcel was on hand to announce the Future of Television Initiative, which she described as a public-private partnership among stakeholders to support a transition to ATSC 3.0. “With over 60 percent of Americans already in range of a Next Gen TV signal, we are excited to work closely with all stakeholders, including the FCC, to bring Next Gen TV and all of its benefits to all viewers,” said LeGeyt.
During his session, LeGeyt also addressed “fierce competition for the dashboard” as part of a discussion of connected cars. “It’s not enough for any one [broadcaster] to innovate. If we are all not rowing in the same direction as an industry, … we are going to lose this arms race,” he warned.
Citing competition from the likes of Spotify, he contends that the local content offered by broadcasters gives them a “competitive advantage.”
The NAB Show runs through Wednesday.
This article was originally published by The Hollywood Reporter.
A new song believed to feature AI-generated fake vocals from Drake and The Weeknd that went viral over the weekend has been pulled from most streaming platforms after their label, Universal Music Group, released a statement Monday (April 17) condemning “infringing content created with generative AI.”
Released by an anonymous TikTok user called Ghostwriter977 and credited as Ghostwriter on streaming platforms where it racked up hundreds of thousands of streams, the track “Heart On My Sleeve” features uncannily similar voices to the two superstars — a trick that the creator says was accomplished by using artificial intelligence. It’s unclear if the entire song was created with AI, or just the soundalike vocals.
By Monday afternoon, the song had generated more than 600,000 spins on Spotify, and Ghostwriter977’s TikTok videos had been viewed more than 15 million times. A YouTube video had another 275,000 views, with an ominous comment from the creator below it: “This is just the beginning.”
Many music fans seemed impressed. One comment on TikTok with more than 75,000 likes said it was the “first AI song that has actually impressed me.” Another said Ghostwriter was “putting out better drake songs than drake himself.” A third said AI was “getting dangerously good.”
But the end could already be in sight. At the time of publishing on Monday evening, “Heart On My Sleeve” had been pulled from Spotify, after earlier disappearing from Apple Music, Deezer and TIDAL.
Even if short-lived, the sensational success of “Heart On My Sleeve” will no doubt underscore growing concerns over the impact of AI on the music industry. Last week, UMG urged streaming platforms like Spotify to block AI companies from accessing the label’s songs to “train” their machines, and the RIAA has warned that doing so infringes copyrights on a mass scale. Last month, a large coalition of industry organizations warned that AI technology should not be used to “replace or erode” human artistry.
Representatives for Drake and The Weeknd declined to comment on Monday. But in a statement to Billboard, UMG said the viral postings “demonstrate why platforms have a fundamental legal and ethical responsibility to prevent the use of their services in ways that harm artists.”
“The training of generative AI using our artists’ music (which represents both a breach of our agreements and a violation of copyright law) as well as the availability of infringing content created with generative AI on DSPs, begs the question as to which side of history all stakeholders in the music ecosystem want to be on: the side of artists, fans and human creative expression, or on the side of deep fakes, fraud and denying artists their due compensation,” a UMG spokesman said in a statement. “We’re encouraged by the engagement of our platform partners on these issues – as they recognize they need to be part of the solution.”
UMG declined to comment on whether it had sent formal takedown requests to streaming services and social media websites.
Drake is in his feelings.
On Friday (April 14), the chart-topping artist took to Instagram to voice his opinion about AI-generated versions of his voice, particularly a video that features him rapping Bronx artist Ice Spice’s “Munch.”
“This is the last straw,” he wrote on his story, along with a post about the AI clip. The pairing of Drake with Ice Spice is particularly interesting, given the rappers’ history. While Drake was an early advocate of Ice Spice, born Isis Gaston, he unfollowed her on Instagram, something Gaston had no explanation for in interviews. However, shortly after, he re-followed her.
Drake’s complaint comes after Universal Music Group asked streaming services including Spotify and Apple Music to prevent artificial intelligence companies from accessing their copyrighted songs. AI companies would use the music to “train” their machines, something that is becoming a cause for concern within the music industry.
In an email sent to Spotify, Apple Music and other streaming platforms, UMG said that it had become aware that certain AI services had been trained on copyrighted music “without obtaining the required consents” from those who own the songs.
“We will not hesitate to take steps to protect our rights and those of our artists,” UMG warned in the email, first obtained by the Financial Times. Billboard confirmed the details with sources on both sides. Although there isn’t clarity on what those steps would be or what streaming platforms can do to stop it, labels and artists alike appear aligned on the need for change.
UMG later issued a statement regarding the email sent to DSPs. “We have a moral and commercial responsibility to our artists to work to prevent the unauthorized use of their music and to stop platforms from ingesting content that violates the rights of artists and other creators. We expect our platform partners will want to prevent their services from being used in ways that harm artists,” it read.
Other AI covers making the rounds include Rihanna singing Beyoncé’s “Cuff It,” which sounded relatively believable, aside from a glitch during a melodic run.
While the implications of artificial intelligence poking its head into music can be scary for artists and labels alike, it’s hard not to smirk at Drizzy rapping, “A– too fat, can’t fit in no jeans.”
President Joe Biden said Tuesday it remains to be seen if artificial intelligence is dangerous, but that he believes technology companies must ensure their products are safe before releasing them to the public.
Biden met with his council of advisers on science and technology about the risks and opportunities that rapid advancements in artificial intelligence pose for individual users and national security.
“AI can help deal with some very difficult challenges like disease and climate change, but it also has to address the potential risks to our society, to our economy, to our national security,” Biden told the group, which includes academics as well as executives from Microsoft and Google.
Artificial intelligence burst to the forefront in the national and global conversation in recent months after the release of the popular ChatGPT AI chatbot, which helped spark a race among tech giants to unveil similar tools, while raising ethical and societal concerns about technology that can generate convincing prose or imagery that looks like it’s the work of humans.
The White House said the Democratic president was using the AI meeting to “discuss the importance of protecting rights and safety to ensure responsible innovation and appropriate safeguards” and to reiterate his call for Congress to pass legislation to protect children and curtail data collection by technology companies.
Italy last week temporarily blocked ChatGPT over data privacy concerns, and European Union lawmakers have been negotiating the passage of new rules to limit high-risk AI products across the 27-nation bloc.
The U.S. so far has taken a different approach. The Biden administration last year unveiled a set of far-reaching goals aimed at averting harms caused by the rise of AI systems, including guidelines for how to protect people’s personal data and limit surveillance.
The Blueprint for an AI Bill of Rights notably did not set out specific enforcement actions, but instead was intended as a call to action for the U.S. government to safeguard digital and civil rights in an AI-fueled world.
Biden’s council, known as PCAST, is composed of science, engineering, technology and medical experts and is co-chaired by the Cabinet-ranked director of the White House Office of Science and Technology Policy, Arati Prabhakar.
Asked if AI is dangerous, Biden said Tuesday, “It remains to be seen. Could be.”
In a new open letter signed by Elon Musk, Steve Wozniak, Andrew Yang and more on Wednesday (March 29), leaders in technology, academia and politics came together to call for a moratorium on training AI systems “more advanced than Chat GPT-4” for “at least 6 months.”
The letter states that “AI systems with human-competitive intelligence can pose profound risks to society and humanity,” including the increased spread of propaganda and fake news as well as automation leading to widespread job loss. “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?” the letter asks.
By drawing the line at AI models “more advanced than Chat GPT-4,” the signees are likely pointing to generative artificial intelligence — a term encompassing a subset of AI that can create new content after being trained via the input of millions or even billions of pieces of data. While some companies license or create their own training data, a large number of AIs are trained using data sets scraped from the web that contain copyright-protected material, including songs, books, articles, images and more. This practice has sparked widespread debate over whether or not AI companies should be required to obtain consent or to compensate the rights holders, and whether the fast-evolving models will endanger the livelihoods of musicians, illustrators and other creatives.
Before late 2022, generative AI was little discussed outside of tech-savvy circles, but it has gained national attention over the last six months. Popular examples of generative AI today include image generators like DALL-E 2, Stable Diffusion and Midjourney, which use simple text prompts to conjure up realistic pictures. Chatbots (also called Large Language Models or “LLMs”) like ChatGPT are also considered generative, as are machines that can create new music at the touch of a button. Though generative AI models in music have yet to make as many headlines as chatbots and image generators, companies like Boomy, Soundful, Beatlab, Google’s Magenta, OpenAI and others are already building them, leading to fears that their output could one day threaten human-made music.
The letter urging the pause in AI training was signed by some of AI’s biggest executives. They notably include Stability AI CEO Emad Mostaque, Conjecture AI CEO Connor Leahy, Unanimous AI CEO and chief scientist Louis Rosenberg and Scale AI CEO Julien Billot. It was also signed by Pinterest co-founder Evan Sharp, Skype co-founder Jaan Tallinn and Ripple CEO Chris Larsen.
Other signees include several engineers and researchers at Microsoft, Google and Meta, though the list notably does not include any names from OpenAI, the firm behind ChatGPT and GPT-4.
“This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities,” the letter continues. Rather, the industry must “jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.”
The letter comes only a few weeks after several major organizations in the entertainment industry, including in music, came together to release a list of seven principles, detailing how they hope to protect and support “human creativity” in the wake of the AI boom. “Policymakers must consider the interests of human creators when crafting policy around AI,” the coalition wrote. “Creators live on the forefront of, and are building and inspiring, evolutions in technology and as such need a seat at the table.”
Generative AI is hot right now. Over the last several years, music artists and labels have warmed to the idea of AI as an exciting new tool. Yet when DALL-E 2, Midjourney and GPT-3 became publicly available, the fear that AI would render artists obsolete came roaring back.
I am here from the world of generative AI with a message: We come in peace. And music and AI can work together to address one of society’s ongoing crises: mental wellness.
While AI can already create visual art and text that are quite convincing versions of their human-made originals, it’s not quite there for music. AI music might be fine for soundtracking UGC videos and ads. But clearly we can do much better with AI and music.
There’s one music category where AI can help solve actual problems and open new revenue streams for everyone, from music labels, to artists, to DSPs. It’s the functional sound market. Largely overlooked and very lucrative, the functional sound market has been steadily growing over the past 10 years, as a societal need for music to heal increases across the globe.
Sound is powerful. It’s the easiest way to control your environment. Sound can change your mood, trigger a memory, or lull you to sleep. It can make you buy more or make you run in terror (think about the music played in stores intentionally to facilitate purchasing behavior or the sound of alarms and sirens). Every day, hundreds of millions of people are self-medicating with sound. If you look at the top 10 most popular playlists at any major streaming service, you’ll see at least 3-4 “functional” playlists: meditation, studying, reading, relaxation, focus, sleep, and so on.
This is the market UMG chief Sir Lucian Grainge singled out in his annual staff memo earlier this year. He’s not wrong: DSPs are swarmed with playlists consisting of dishwasher sounds and white noise, which divert revenue and attention from music artists. Functional sound is a vast ocean of content with no clear leader or even a clear product.
The nuance here is that the way people consume functional sound is fundamentally different from the way they consume traditional music. When someone tunes into a sleep playlist, they care first and foremost if it works. They want it to help them fall asleep, as fast as possible. It’s counterintuitive to listen to your favorite artist when you’re trying to go to sleep (or focus, study, read, meditate). Most artist-driven music is not scientifically engineered to put you into a desired cognitive state. It’s designed to hold your attention or express some emotion or truth the artist holds dear. That’s why ambient music — which, as Brian Eno put it, is as ignorable as it is interesting — had its renaissance moment a few years ago, arguably propelled by the mental health crisis.
How can AI help music artists and labels win back market share from white noise and dishwasher sounds playlists? Imagine that your favorite music exists in two forms: the songs and albums that you know and love, and a functional soundscape version that you can sleep, focus, or relax to. The soundscape version is produced by feeding the source stems from the album or song into a neuroscience-informed Generative AI engine. The stems are processed, multiplied, spliced together and overlaid with FX, birthing a functional soundscape built from the DNA of your favorite music. This is when consumers finally have a choice: fall asleep or study/read/focus to a no-name white-noise playlist, or do it with a scientifically engineered functional soundscape version of their favorite music.
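The stem-processing pipeline described here — source stems looped, stretched, softened, and overlaid into a soundscape bed — can be sketched in miniature. Everything below is a hypothetical toy: the sine-wave “stems,” the function names, and the moving-average “FX” merely stand in for what an actual neuroscience-informed engine would do.

```python
import numpy as np

SR = 22050  # sample rate in Hz (assumed for this toy example)

def make_stem(freq, seconds=2.0):
    """Synthesize a stand-in 'stem' as a plain sine tone (illustrative only)."""
    t = np.linspace(0, seconds, int(SR * seconds), endpoint=False)
    return np.sin(2 * np.pi * freq * t)

def slow_and_loop(stem, repeats=2):
    """Crude 2x time-stretch via sample interpolation, then loop the result."""
    idx = np.linspace(0, len(stem) - 1, len(stem) * 2)
    stretched = np.interp(idx, np.arange(len(stem)), stem)
    return np.tile(stretched, repeats)

def soften(signal, window=101):
    """Moving-average low-pass filter, standing in for ambient FX processing."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")

def functional_mix(stems):
    """Overlay the processed stems and normalize into one soundscape bed."""
    processed = [soften(slow_and_loop(s)) for s in stems]
    n = min(len(p) for p in processed)           # align lengths before summing
    mix = sum(p[:n] for p in processed)
    return mix / np.max(np.abs(mix))             # peak-normalize to [-1, 1]

# Two toy "stems" at 220 Hz and 330 Hz, mixed into a slowed, softened bed.
stems = [make_stem(220.0), make_stem(330.0)]
bed = functional_mix(stems)
```

In a real system the stems would come from licensed recordings and the processing would be far more sophisticated, but the shape of the pipeline — process, multiply, splice, overlay, normalize — is the same.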
This is how Generative AI can create new revenue streams for all agents of the music industry, today: music labels win a piece of the market with differentiated functional content built from their catalog; artists expand their music universe, connect with their audience in new and meaningful ways, and extend the shelf life of their material; DSPs get ample, quality-controlled content that increases engagement. Once listeners find sounds that achieve their goals, they often stick with them. For example, Wind Down, James Blake’s sleep soundscape album, shows a 50% listener retention in its seventh month after release. This shows that, when done right, functional sound has an incredibly long shelf life.
This win-win-win future is already here. By combining art, generative AI technology and science, plus business structures that enable such deals, we can transform amazing artist-driven sounds into healing soundscapes that listeners crave. In an age that yearns for calm, clarity, and better mental health, we can utilize AI to create new music formats that rights holders can embrace and listeners can appreciate. It promises AI-powered music that not only sounds good, but improves people’s lives, and supports artists. This is how you ride the functional music wave and create something listeners will find real value in and keep coming back to. Do not be afraid. Work with us. Embrace the future.
Oleg Stavitsky is co-founder and CEO of Endel, a sound wellness company that utilizes generative AI and science-backed research.
In seeking to clear up questions about the copyrightability of AI in music, the U.S. Copyright Office (USCO) recently signaled that copyrighting songs is about to get a lot more complicated.
Last week the USCO released guidance on the copyrightability of works made using AI, saying that a work combining AI generation and human creation can be eligible for copyright protection, with any purely AI-made portions carved out. Essentially, it takes the position that copyright extends only to the portions of the work that are attributable to human authorship.
This sounds logical, but such clear boundaries often do not exist in music. The USCO acknowledges this by leaving space for copyrighting AI-generated content if it gave form to an author’s “original mental conception,” as opposed to being a purely “mechanical reproduction.”
Giving form to an idea is something songwriters are familiar with. Whether for writer’s block, inspiration, or organization, many, if not most, current creators use AI tools to some extent, and how that informs their process often is not clearly defined.
To address this, the policy caveat is that the copyrightability of any given work will depend on its specific circumstances and will need to be determined on a case-by-case basis. It’s worth noting copyright does not protect ideas, only expression, and these distinctions will no doubt be complex when addressed in practice. Specifically, it states:
“This policy does not mean that technological tools cannot be part of the creative process. Authors have long used such tools to create their works or to recast, transform, or adapt their expressive authorship. For example, a visual artist who uses Adobe Photoshop to edit an image remains the author of the modified image, and a musical artist may use effects such as guitar pedals when creating a sound recording. In each case, what matters is the extent to which the human had creative control over the work’s expression and ‘actually formed’ the traditional elements of authorship.”
The USCO has been engaging with the relevant parties on this topic for some time, and there is great pressure to chart the path on AI as platforms become increasingly advanced. Across the art world, AI is already pushing boundaries.
This most recent policy guidance was also likely prompted by a pending lawsuit over whether any human authorship is required for copyrightability. The case was brought against the Copyright Office by an AI developer whose registration for a visual work of art was rejected because he listed AI as the author.
The lawsuit argues that the Copyright Act does not require human authorship. While it is true that the Copyright Act never uses the phrase “human authorship” and instead refers to “original works of authorship,” the Copyright Office’s decision not to grant the copyright is bolstered by decades of case law interpreting “author” to mean “human.” A few years ago, a selfie taken by a monkey was deemed ineligible for copyright protection on the basis that the monkey was not a human author.
The USCO has authority to prescribe application requirements and to “establish regulations not inconsistent with law for the administration of the functions and duties made the responsibility of the Register.” (17 U.S.C. 702). However, the Copyright Office will be subject to the court’s ruling in this case.
As far as the current rule limiting copyrightability to human expression goes, the exact amount of human involvement necessary to merit copyright protection in a work created using AI remains to be seen. This untested line raises significant questions for the music industry and the foreseeable future of AI-assisted songwriting.
The primary example we have from the Copyright Office is fairly straightforward; however, it is not a song. An author submitted an application to register a comic book in which the text was written by the human author but the images were generated by AI, through a tool called Midjourney.
The Copyright Office determined that while the work was copyrightable, the copyright only extended to the human-authored text, and to the human authorship involved in the selection and arrangement of the images but did not extend to the AI-generated images themselves.
Clearly a comic book allows for easy differentiation between images and text. That may be analogous to, for example, a melody created purely by AI combined with lyrics created purely by a human or vice versa. In cases like this, foreseeable questions would arise around remixing and sampling—is it fair game to remix and sample portions of a song that were created by AI and excluded from copyright protection?
While it’s easier to discern how the Copyright Office will rule on some hypotheticals, it’s extremely unclear how these lines will be drawn when the human and AI contributions are more intertwined.
AI is often used as a collaborative partner in the creative process. For example, a human songwriter might use an AI tool to generate a MIDI file containing a few bars of melody, or a text generator to suggest a few stanzas of lyrics, and then substantially edit and revise the AI-generated content and combine it with entirely original lines and melodies from their own imagination.
In that situation, it is unclear how the Copyright Office would begin to distinguish between the human authorship and AI authorship involved. At what point, if any, of editing and changing lyrics generated by AI would they become lyrics generated by a person? What determines a significant enough change to be considered original? How will examiners investigate these questions when reviewing a copyright application? The USCO advises:
“applicants have a duty to disclose the inclusion of AI-generated content in a work submitted for registration and to provide a brief explanation of the human author’s contributions to the work. As contemplated by the Copyright Act, such disclosures are ‘information regarded by the Register of Copyrights as bearing upon the preparation or identification of the work or the existence, ownership, or duration of the copyright.’”
It’s clear how copyright registration could immediately become more complicated and time-consuming with these new considerations. One must question whether the USCO has the manpower and resources to take on what is, in some ways, an entirely new evaluation process for any registration involving AI.
And aside from registration, these big questions will shape future licensing practices—is a license for a work that is only partially copyrightable worth the same as a license for a fully copyrighted work? What about a work that doesn’t have enough human contribution and doesn’t receive copyright protection—is it free to use or stream? How will this affect royalty administration?
Beyond the ability to differentiate what is AI-created and what is human-created, there are even larger questions looming around this space. AI works by continually ingesting, or copying, works across the Internet to “teach” its platform to create. To what extent does that ingestion need to be licensed?
Whether they like it or not, the work of human creators is essentially “training” the computer programs trying to replace them, or, some would argue, assist them. AI will continue to be integrated into the creative process, and in an era when the value of human-created music continues to be challenged, it is crucial that the music industry decide how to approach these issues in a way that ultimately ensures the long-term value and quality of human-made songs. After all, there would be no AI-generated music without them.
David Israelite is the President & CEO of the National Music Publishers’ Association (NMPA). NMPA is the trade association representing American music publishers and their songwriting partners.
A wide coalition of music industry organizations has joined together to release a series of core principles regarding artificial intelligence — the first collective stance the entertainment business has taken on the topic. Announced during the panel “Welcome to the Machine: Art in the Age of A.I.” held on Thursday (March 16) at South by Southwest (SXSW) and moderated by Billboard deputy editorial director Robert Levine, the principles reveal a growing sense of urgency among entertainment industry leaders to address the quickly evolving issue.
“Over the past few months, I think [generative artificial intelligence] has gone from a ‘someday’ issue to a today issue,” said Levine. “It’s coming much quicker than anyone thought.”
In response to the fast-approaching collision of generative AI and the entertainment business, the principles detail the need for using the new technology to “empower human expression” while also asserting the importance of representing “creators’ interests…in policymaking” regarding the technology. Principles geared toward the latter include ensuring that AI developers acquire licenses for artistic works used in the “development and training of AI models” — and keep records of which works are used — and that governments refrain from creating “copyright or other IP exemptions” for the technology.
Among the 40 different groups that have joined the coalition — dubbed the Human Artistry Campaign — are music industry leaders including the Recording Industry Association of America (RIAA), National Music Publishers’ Association (NMPA), American Association of Independent Music (A2IM), SoundExchange, ASCAP, BMI and more.
Read the full list of principles below; the coalition has also published more information, including the full list of groups involved in the effort.
Core Principles for Artificial Intelligence Applications in Support of Human Creativity and Accomplishments:
Technology has long empowered human expression, and AI will be no different.
For generations, various technologies have been used successfully to support human creativity. Take music, for example… From piano rolls to amplification to guitar pedals to synthesizers to drum machines to digital audio workstations, beat libraries and stems and beyond, musical creators have long used technology to express their visions through different voices, instruments, and devices. AI already is and will increasingly play that role as a tool to assist the creative process, allowing for a wider range of people to express themselves creatively.
Moreover, AI has many valuable uses outside of the creative process itself, including those that amplify fan connections, hone personalized recommendations, identify content quickly and accurately, assist with scheduling, automate and enhance efficient payment systems – and more. We embrace these technological advances.
Human-created works will continue to play an essential role in our lives.
Creative works shape our identity, values, and worldview. People relate most deeply to works that embody the lived experience, perceptions, and attitudes of others. Only humans can create and fully realize works written, recorded, created, or performed with such specific meaning. Art cannot exist independent of human culture.
Use of copyrighted works, and use of the voices and likenesses of professional performers, requires authorization, licensing, and compliance with all relevant state and federal laws.
We fully recognize the immense potential of AI to push the boundaries for knowledge and scientific progress. However, as with predecessor technologies, the use of copyrighted works requires permission from the copyright owner. AI must be subject to free-market licensing for the use of works in the development and training of AI models. Creators and copyright owners must retain exclusive control over determining how their content is used. AI developers must ensure any content used for training purposes is approved and licensed from the copyright owner, including content previously used by any pre-trained AIs they may adopt. Additionally, performers’ and athletes’ voices and likenesses must only be used with their consent and fair market compensation for specific uses.
Governments should not create new copyright or other IP exemptions that allow AI developers to exploit creators without permission or compensation.
AI must not receive exemptions from copyright law or other intellectual property laws and must comply with core principles of fair market competition and compensation. Creating special shortcuts or legal loopholes for AI would harm creative livelihoods, damage creators’ brands, and limit incentives to create and invest in new works.
Copyright should only protect the unique value of human intellectual creativity.
Copyright protection exists to help incentivize and reward human creativity, skill, labor, and judgment — not output solely created and generated by machines. Human creators, whether they use traditional tools or express their creativity using computers, are the foundation of the creative industries, and we must ensure that human creators are paid for their work.
Trustworthiness and transparency are essential to the success of AI and protection of creators.
Complete recordkeeping of copyrighted works, performances, and likenesses, including the way in which they were used to develop and train any AI system, is essential. Algorithmic transparency and clear identification of a work’s provenance are foundational to AI trustworthiness. Stakeholders should work collaboratively to develop standards for technologies that identify the input used to create AI-generated output. In addition to obtaining appropriate licenses, content generated solely by AI should be labeled describing all inputs and methodology used to create it — informing consumer choices, and protecting creators and rightsholders.
Creators’ interests must be represented in policymaking.
Policymakers must consider the interests of human creators when crafting policy around AI. Creators live on the forefront of, and are building and inspiring, evolutions in technology and as such need a seat at the table in any conversations regarding legislation, regulation, or government priorities regarding AI that would impact their creativity and the way it affects their industry and livelihood.
A new policy report from the U.S. Copyright Office says that songs and other artistic works created with the assistance of artificial intelligence can sometimes be eligible for copyright registration, but only if the ultimate author remains a human being.
The report, released by the federal agency on Wednesday (March 15), comes amid growing interest in the future role that could be played in the creation of music by so-called generative AI tools — similar to the much-discussed ChatGPT.
Copyright protection is strictly limited to content created by humans, leading to heated debate over the status of AI-generated works. In a closely watched case last month, the Copyright Office decided that a graphic novel featuring AI-generated images was eligible for protection, but that the individual images couldn’t be protected.
In Wednesday’s report, the agency said that the use of AI tools did not automatically bar copyright registration, but that it would be closely scrutinized and could not play a dominant role in the creative process.
“If a work’s traditional elements of authorship were produced by a machine, the work lacks human authorship and the Office will not register it,” the agency wrote. “For example, when an AI technology receives solely a prompt from a human and produces complex written, visual, or musical works in response, the traditional elements of authorship are determined and executed by the technology — not the human user.”
The report listed examples of AI-aided works that might still be worthy of protection, like a work that creatively combined AI-generated elements into something new, or an AI-generated work that an artist then heavily modified after the fact. And it stressed that other technological tools were still fair game.
“A visual artist who uses Adobe Photoshop to edit an image remains the author of the modified image, and a musical artist may use effects such as guitar pedals when creating a sound recording,” the report said. “In each case, what matters is the extent to which the human had creative control over the work’s expression and ‘actually formed’ the traditional elements of authorship.”
Under the rules laid out in the report, the Copyright Office said that anyone submitting such works must disclose which elements were created by AI and which were created by a human. The agency said that any AI-inclusive work that was previously registered without such a disclosure must be updated — and that failure to do so could result in the cancellation of the copyright registration.
Though aimed at providing guidance, Wednesday’s report avoided hard-and-fast rules. It stressed that analyzing copyright protection for AI-assisted works would be “necessarily a case-by-case inquiry,” and that the final outcome would always depend on individual circumstances, including “how the AI tool operates” and “how it was used to create the final work.”
And the report didn’t even touch on a potentially thornier legal question: whether the creators of AI platforms infringe the copyrights of the vast number of earlier works that are used to “train” the platforms to spit out new works. In October, the Recording Industry Association of America (RIAA) warned that such providers were violating copyrights en masse by using existing music to train their machines.
“To the extent these services, or their partners, are training their AI models using our members’ music, that use is unauthorized and infringes our members’ rights by making unauthorized copies of our members’ works,” the RIAA said at the time.
Though Wednesday’s report did not offer guidance on that question, the Copyright Office said it had plans to weigh in soon.
“[The Office] has launched an agency-wide initiative to delve into a wide range of these issues,” the agency wrote. “Among other things, the Office intends to publish a notice of inquiry later this year seeking public input on additional legal and policy topics, including how the law should apply to the use of copyrighted works in AI training and the resulting treatment of outputs.”