AI
When Universal Music Group emailed Spotify, Apple Music and other streaming services in March asking them to stop artificial-intelligence companies from using its labels’ recordings to train their machine-learning software, it fired the first Howitzer shell of what’s shaping up as the next conflict between creators and computers. As Warner Music Group, HYBE, ByteDance, Spotify and other industry giants invest in AI development, along with a plethora of small startups, artists and songwriters are clamoring for protection against developers that use music created by professionals to train AI algorithms. Developers, meanwhile, are looking for safe havens where they can continue their work unfettered by government interference.
To someday generate music that rivals the work of human creators, AI models use machine learning to identify and mimic the patterns that make a song irresistible, like the sticky verse-chorus structure of pop, the 808 drums that anchor the rhythm of hip-hop or the meteoric drop that defines electronic dance music. These are distinctions human musicians learn over a lifetime, through osmosis or formal music education.
Machine learning is exponentially faster, though; it’s usually achieved by feeding millions, even billions, of so-called “inputs” into an AI model to build its musical vocabulary. Because of the sheer scale of data needed to train current systems, those inputs almost always include the work of professionals, and, to many copyright owners’ dismay, almost no one asks their permission to use it.
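To make that process concrete, here is a minimal, illustrative sketch of a pattern-learning loop, written in PyTorch with random tensors standing in for audio features; the model, shapes and data are hypothetical stand-ins, not any company’s actual training pipeline.

```python
# Minimal sketch of machine learning on "inputs": a tiny autoencoder learns to
# compress and reconstruct feature vectors, i.e., to capture their statistical
# patterns. Real systems do this at the scale of millions of recordings.
import torch
import torch.nn as nn

# Hypothetical stand-in inputs: 256 clips, each a 128-dim feature vector.
inputs = torch.randn(256, 128)

model = nn.Sequential(
    nn.Linear(128, 32), nn.ReLU(),  # encoder: squeeze features into 32 dims
    nn.Linear(32, 128),             # decoder: reconstruct the original features
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(100):
    reconstruction = model(inputs)
    loss = loss_fn(reconstruction, inputs)  # how far off is the mimicry?
    optimizer.zero_grad()
    loss.backward()  # nudge weights toward patterns that reduce the error
    optimizer.step()
```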
Countries around the world regulate what’s called the text and data mining of copyrighted material for AI training in different ways. And some territories are concluding that fewer rules will lead to more business.
China, Israel, Japan, South Korea and Singapore are among the countries that have largely positioned themselves as safe havens for AI companies in terms of industry-friendly regulation. In January, Israel’s Ministry of Justice defined its stance on the issue, saying that “lifting the copyright uncertainties that surround this issue [of training AI generators] can spur innovation and maximize the competitiveness of Israeli-based enterprises in both [machine-learning] and content creation.”
Singapore also “certainly strives to be a hub for AI,” says Bryan Tan, attorney and partner at Reed Smith, which has an office there. “It’s one of the most permissive places. But having said that, I think the world changes very quickly,” Tan says. He adds that even in countries where exceptions in copyright for text and data mining are established, there is a chance that developments in the fast-evolving AI sector could lead to change.
In the United States, Amir Ghavi, a partner at Fried Frank who is representing open-source text-to-image developer Stability AI in a number of upcoming landmark cases, says that though the country has a “strong tradition of fair use … this is all playing out in real time,” with decisions in upcoming cases like his set to establish significant precedents for AI and copyright law.
Many rights owners, including musicians like Helienne Lindevall, president of the European Composers and Songwriters Alliance, are hoping to establish consent as a basic practice. But, she asks, “How do you know when AI has used your work?”
AI companies tend to keep their training process secret, but Mat Dryhurst, a musician, podcast host and co-founder of music technology company Spawning, says many rely on just a few data sets, such as LAION-5B (as in its roughly 5 billion data points) and Common Crawl, an openly available repository of scraped web data that Google and others have drawn on. To help establish a compromise between copyright owners and AI developers, Spawning has created a website called HaveIBeenTrained.com, which helps creators determine whether their work is found in these common data sets and, free of charge, opt out of being used as fodder for training.
These requests are not backed by law, although Dryhurst says, “We think it’s in every AI organization’s best interest to respect our active opt-outs. One, because this is the right thing to do, and two, because the legality of this varies territory to territory. This is safer legally for AI companies, and we don’t charge them to partner with us. We do the work for them.”
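As a rough sketch of what honoring those opt-outs could look like on the developer side, the hypothetical Python below filters a list of candidate training works against a local export of opt-out records; the file name, record format and helper functions are assumptions for illustration, not Spawning’s actual API.

```python
# Hypothetical opt-out filtering before training. The optouts.json format
# shown here is an invented example, not a real Spawning export.
import json

def load_opted_out_urls(path: str) -> set[str]:
    """Read opt-out records, e.g. [{"url": "https://..."}], into a set of URLs."""
    with open(path) as f:
        records = json.load(f)
    return {record["url"] for record in records}

def filter_training_list(candidates: list[str], opted_out: set[str]) -> list[str]:
    """Keep only works whose creators have not opted out."""
    return [url for url in candidates if url not in opted_out]

if __name__ == "__main__":
    opted_out = load_opted_out_urls("optouts.json")
    candidates = ["https://example.com/song-a.mp3", "https://example.com/song-b.mp3"]
    print(filter_training_list(candidates, opted_out))
```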
The concept of opting out was first popularized by the European Union’s Copyright Directive, passed in 2019. Though Sophie Goossens, a partner at Reed Smith who works in Paris and London on entertainment, media and technology law, says the definition of “opt out” was initially vague, its inclusion makes the EU one of the strictest jurisdictions when it comes to AI training.
There is a fear, however, that passing strict AI copyright regulations could result in a country missing the opportunity to establish itself as a next-generation Silicon Valley and reap the economic benefits that would follow. Russian President Vladimir Putin believes the stakes are even higher. In 2017, he stated that the nation that leads in AI “will be the ruler of the world.” The United Kingdom’s Intellectual Property Office seemed to be moving in that direction when it published a statement last summer recommending that text and data mining be exempt from opt-outs in hopes of becoming Europe’s haven for AI. In February, however, the British government put the brakes on the IPO’s proposal, leaving its future uncertain.
Lindevall and others in the music industry say they are fighting for even better standards. “We don’t want to opt out, we want to opt in,” she says. “Then we want a clear structure for remuneration.”
The lion’s share of U.S.-based music and entertainment organizations — more than 40, including ASCAP, BMI, RIAA, SESAC and the National Music Publishers’ Association — are in agreement and recently launched the Human Artistry Campaign, which established seven principles for AI best practices intended to protect creators’ copyrights. No. 4: “Governments should not create new copyright or other IP exemptions that allow AI developers to exploit creators without permission or compensation.”
Today, the idea that rights holders could one day license works for machine learning still seems far off. Among the potential solutions for remuneration are blanket licenses, something like the blank-tape levies used in parts of Europe. But given the patchwork of international law on this subject, and the complexities of tracking down and paying rights holders, some feel these fixes are not viable.
Dryhurst says he and the Spawning team are working on a concrete solution: an “opt in” tool. Stability AI has signed on as its first partner, and Dryhurst says the newest version of its text-to-image AI software, Stable Diffusion 3, will not be trained on any of the 78 million artworks whose creators opted out prior to this advancement. “This is a win,” he says. “I am really hopeful others will follow suit.”
A song featuring AI-generated fake vocals from Drake and The Weeknd might be a scary moment for artists and labels whose livelihoods feel threatened, but does it violate the law? It’s a complicated question.
The song “Heart on My Sleeve,” which also featured Metro Boomin’s distinctive producer tag, racked up hundreds of thousands of spins on streaming services before it was pulled down on Monday evening, powered to viral status by uncannily similar vocals over a catchy instrumental track. Millions more have viewed shorter snippets of the song that the anonymous creator posted to TikTok.
It’s unclear whether only the soundalike vocals were created with AI tools – a common trick used for years in internet parody videos and deepfakes – or if the entire song was created solely by a machine based purely on a prompt to create a Drake track, a more novel and potentially disruptive development.
For an industry already on edge about the sudden growth of artificial intelligence, the appearance of a song that convincingly replicated the work product of two of music’s biggest stars and one of its top producers and won over likely millions of listeners has set off serious alarm bells.
“The ability to create a new work this realistic and specific is disconcerting, and could pose a range of threats and challenges to rights owners, musicians, and the businesses that invest in them,” says Jonathan Faber, the founder of Luminary Group and an attorney who specializes in protecting the likeness rights of famous individuals. “I say that without attempting to get into even thornier problems, which likely also exist as this technology demonstrates what it may be capable of.”
“Heart On My Sleeve” was quickly pulled down, disappearing from most streaming services by Monday evening. Representatives for Drake, The Weeknd and Spotify all declined to comment when asked about the song on Monday. And while the artists’ label, Universal Music Group, issued a strongly worded statement condemning “infringing content created with generative AI,” a spokesperson would not say whether the company had sent formal takedown requests over the song.
A rep for YouTube said on Tuesday that the platform “removed the video in question after receiving a valid takedown notice,” noting that the track was removed because it used a copyrighted music sample.
The debacle highlights a monumental legal question for the music industry, one that will likely be at the center of legal battles for years to come: To what extent do AI-generated songs violate the law? Though “Heart on My Sleeve” was removed relatively quickly, the question is more complicated than it might seem.
For starters, the song appears to be an original composition that doesn’t directly copy any of Drake or the Weeknd’s songs, meaning that it could be hard to make a claim that it infringes their copyrights, like when an artist uses elements of someone else’s song without permission. While Metro Boomin’s tag may have been illegally sampled, that element likely won’t exist in future fake songs.
By mimicking their voices, however, the track represents a clearer potential violation of Drake and The Weeknd’s so-called right of publicity – the legal right to control how your individual identity is commercially exploited by others. Such rights are more typically invoked when someone’s name or visual likeness is stolen, but they can extend to someone’s voice if it’s particularly well-known – think Morgan Freeman or James Earl Jones.
“The right of publicity provides recourse for rights owners who would otherwise be very vulnerable to technology like this,” Faber said. “It fits here because a song is convincingly identifiable as Drake and the Weeknd.”
Whether a right of publicity lawsuit is legally viable against this kind of voice mimicry might be tested in court soon, albeit in a case dealing with decidedly more old-school tech.
Back in January, Rick Astley sued Yung Gravy over the rapper’s breakout 2022 hit that heavily borrowed from the singer’s iconic “Never Gonna Give You Up.” While Yung Gravy had licensed the underlying composition, Astley claimed Yung Gravy violated his right of publicity when he hired a singer who mimicked his distinctive voice.
That case has key differences from the situation with “Heart on My Sleeve,” like the allegation that Gravy falsely suggested to his listeners that Astley had actually endorsed his song. In the case of “Heart on My Sleeve,” the anonymous creator Ghostwriter omitted any reference to Drake and The Weeknd on streaming platforms; on TikTok, he directly stated that he, and not the two superstars, had created his song using AI.
But for Richard Busch of the law firm King & Ballow, a veteran music industry litigator who brought the lawsuit on behalf of Astley, the right of publicity and its protections for likeness still provide the most useful tool for artists and labels confronted with such a scenario in the future.
“If you are creating a song that sounds identical to, let’s say, Rihanna, regardless of what you say people are going to believe that it was Rihanna. I think there’s no way to get around that,” Busch said. “The strongest claim here would be the use of likeness.”
But do AI companies themselves break the law when they create programs that can so effectively mimic Drake and The Weeknd’s voices? That would seem to be the far larger looming crisis, and one without the same kind of relatively clear legal answers.
The fight ahead will likely be over how AI platforms are “trained” – the process whereby machines “learn” to spit out new creations by ingesting millions of existing works. From the point of view of many in the music industry, if that process is accomplished by feeding a platform copyrighted songs — in this case, presumably, recordings by Drake and The Weeknd — then those platforms and their owners are infringing copyrights on a mass scale.
In UMG’s statement Monday, the label said clearly that it believes such training to be a “violation of copyright law,” and the company previously warned that it “will not hesitate to take steps to protect our rights and those of our artists.” The RIAA has said the same, blasting AI companies for making “unauthorized copies of our members’ works” to train their machines.
While the training issue is legally novel and unresolved, it could be answered in court soon. A group of visual artists has filed a class action over the use of their copyrighted images to train AI platforms, and Getty Images has filed a similar case against AI companies that allegedly “scraped” its database for training materials.
And after this week’s incident over “Heart on My Sleeve,” a similar lawsuit against AI platforms filed by artists or music companies gets more likely by the day.
National Association of Broadcasters president and CEO Curtis LeGeyt spoke out on the potential dangers of artificial intelligence on Monday at the NAB Show in Las Vegas. “This is an area where NAB will absolutely be active,” he asserted of AI, which is one of the buzziest topics this week at the annual convention. “It is just amazing how quickly the relevance of AI to our entire economy — but specifically, since we’re in this room, the broadcast industry — has gone from amorphous concept to real.”
LeGeyt warned of several concerns that he has for local broadcasters, the first being issues surrounding “big tech” taking broadcast content and not fairly compensating broadcasters for its use. “We have been fighting for legislation to put some guardrails on it,” LeGeyt said. “AI has the potential to put that on overdrive. We need to ensure that our stations, our content creators are going to be fairly compensated.”
He added that he worries for journalists. “We’re already under attack for any slip-up we might have with regard to misreporting on a story. Well, you’re gonna have to do a heck of a lot more diligence to ensure that whatever you are reporting on is real, fact-based information and not just some AI bot that happens to look like Joe Biden.” Finally, he warned of images and likenesses being misappropriated where AI is involved.
“I want to wave the caution flag on some of these areas,” he said. “I think this could be really damaging for local broadcast.”
During his talk, he also outlined what he sees as potential opportunities. “My own view is there are some real potentially hyperlocal benefits to AI,” he said, citing as examples translation services and the ability to speed up research at “resource-constrained local stations.” He asserted, “Investigative journalism is never going to be replaced by AI. Our role at local community events, philanthropic work, is never going to be replaced by AI. But to the degree that we can leverage AI to do some of the things that are time-consuming and take away your ability to be boots on the ground doing the things that only you can do well, I think that’s a positive.”
Also addressed during the session was the voluntary rollout of the next generation of digital television, known as ATSC 3.0, which may include capabilities such as free, live broadcasting to mobile devices. A change of this magnitude has a lot of moving parts and has a long way to go before its potential can be realized.
At NAB, FCC chairwoman Jessica Rosenworcel was on hand to announce the Future of Television Initiative, which she described as a public-private partnership among stakeholders to support a transition to ATSC 3.0. “With over 60 percent of Americans already in range of a Next Gen TV signal, we are excited to work closely with all stakeholders, including the FCC, to bring Next Gen TV and all of its benefits to all viewers,” said LeGeyt.
During his session, LeGeyt also addressed “fierce competition for the dashboard” as part of a discussion of connected cars. “It’s not enough for any one [broadcaster] to innovate. If we are all not rowing in the same direction as an industry, … we are going to lose this arms race,” he warned.
Citing competition from the likes of Spotify, he contended that the local content offered by broadcasters gives them a “competitive advantage.”
The NAB Show runs through Wednesday.
This article was originally published by The Hollywood Reporter.
A new song believed to feature AI-generated fake vocals from Drake and The Weeknd that went viral over the weekend has been pulled from most streaming platforms after their label, Universal Music Group, released a statement Monday (April 17) condemning “infringing content created with generative AI.”
Released by an anonymous TikTok user called Ghostwriter977 and credited as Ghostwriter on streaming platforms, where it racked up hundreds of thousands of streams, the track “Heart On My Sleeve” features voices uncannily similar to those of the two superstars — a trick that the creator says was accomplished by using artificial intelligence. It’s unclear if the entire song was created with AI, or just the soundalike vocals.
By Monday afternoon, the song had generated more than 600,000 spins on Spotify, and Ghostwriter977’s TikTok videos had been viewed more than 15 million times. A YouTube video had another 275,000 views, with an ominous comment from the creator below it: “This is just the beginning.”
Many music fans seemed impressed. One comment on TikTok with more than 75,000 likes said it was the “first AI song that has actually impressed me.” Another said Ghostwriter was “putting out better drake songs than drake himself.” A third said AI was “getting dangerously good.”
But the end could already be in sight. At the time of publishing on Monday evening, “Heart On My Sleeve” had just been pulled from Spotify, after earlier disappearing from Apple Music, Deezer and TIDAL.
Even if short-lived, the sensational success of “Heart On My Sleeve” will no doubt underscore growing concerns over the impact of AI on the music industry. Last week, UMG urged streaming platforms like Spotify to block AI companies from accessing the label’s songs to “train” their machines, and the RIAA has warned that doing so infringes copyrights on a mass scale. Last month, a large coalition of industry organizations warned that AI technology should not be used to “replace or erode” human artistry.
Representatives for Drake and The Weeknd declined to comment on Monday. But in a statement to Billboard, UMG said the viral postings “demonstrate why platforms have a fundamental legal and ethical responsibility to prevent the use of their services in ways that harm artists.”
“The training of generative AI using our artists’ music (which represents both a breach of our agreements and a violation of copyright law) as well as the availability of infringing content created with generative AI on DSPs, begs the question as to which side of history all stakeholders in the music ecosystem want to be on: the side of artists, fans and human creative expression, or on the side of deep fakes, fraud and denying artists their due compensation,” a UMG spokesman said in a statement. “We’re encouraged by the engagement of our platform partners on these issues – as they recognize they need to be part of the solution.”
UMG declined to comment on whether it had sent formal takedown requests to streaming services and social media websites.
Drake is in his feelings.
On Friday (April 14), the chart-topping artist took to Instagram to voice his opinion about AI-generated versions of his voice, particularly a video that features him rapping Bronx artist Ice Spice’s “Munch.”
“This is the last straw,” he wrote on his story, along with a post about the AI clip. The pairing of Drake with Ice Spice is particularly interesting, given the rappers’ history. While Drake was an early advocate of Ice Spice, born Isis Gaston, he unfollowed her on Instagram, something Gaston had no explanation for in interviews. However, shortly after, he re-followed her.
Drake’s complaint comes after Universal Music Group asked streaming services including Spotify and Apple Music to prevent artificial intelligence companies from accessing its copyrighted songs. AI companies would use the music to “train” their machines, something that is becoming a cause for concern within the music industry.
In an email sent to Spotify, Apple Music and other streaming platforms, UMG said that it had become aware that certain AI services had been trained on copyrighted music “without obtaining the required consents” from those who own the songs.
“We will not hesitate to take steps to protect our rights and those of our artists,” UMG warned in the email, first obtained by the Financial Times. Billboard confirmed the details with sources on both sides. Although it isn’t clear what those steps would be, or what streaming platforms can do to stop the practice, labels and artists alike seem aligned on the need for change.
UMG later issued a statement regarding the email sent to DSPs. “We have a moral and commercial responsibility to our artists to work to prevent the unauthorized use of their music and to stop platforms from ingesting content that violates the rights of artists and other creators. We expect our platform partners will want to prevent their services from being used in ways that harm artists,” it read.
Other AI covers making the rounds include Rihanna singing Beyoncé’s “Cuff It,” which sounded relatively believable, aside from a glitch during a melodic run.
While the implications of artificial intelligence poking its head into music can be scary for artists and labels alike, it’s hard not to smirk at Drizzy rapping, “A– too fat, can’t fit in no jeans.”
President Joe Biden said Tuesday it remains to be seen if artificial intelligence is dangerous, but that he believes technology companies must ensure their products are safe before releasing them to the public.
Biden met with his council of advisers on science and technology about the risks and opportunities that rapid advancements in artificial intelligence pose for individual users and national security.
“AI can help deal with some very difficult challenges like disease and climate change, but it also has to address the potential risks to our society, to our economy, to our national security,” Biden told the group, which includes academics as well as executives from Microsoft and Google.
Artificial intelligence burst to the forefront in the national and global conversation in recent months after the release of the popular ChatGPT AI chatbot, which helped spark a race among tech giants to unveil similar tools, while raising ethical and societal concerns about technology that can generate convincing prose or imagery that looks like it’s the work of humans.
The White House said the Democratic president was using the AI meeting to “discuss the importance of protecting rights and safety to ensure responsible innovation and appropriate safeguards” and to reiterate his call for Congress to pass legislation to protect children and curtail data collection by technology companies.
Italy last week temporarily blocked ChatGPT over data privacy concerns, and European Union lawmakers have been negotiating the passage of new rules to limit high-risk AI products across the 27-nation bloc.
The U.S. so far has taken a different approach. The Biden administration last year unveiled a set of far-reaching goals aimed at averting harms caused by the rise of AI systems, including guidelines for how to protect people’s personal data and limit surveillance.
The Blueprint for an AI Bill of Rights notably did not set out specific enforcement actions, but instead was intended as a call to action for the U.S. government to safeguard digital and civil rights in an AI-fueled world.
Biden’s council, known as PCAST, is composed of science, engineering, technology and medical experts and is co-chaired by the Cabinet-ranked director of the White House Office of Science and Technology Policy, Arati Prabhakar.
Asked if AI is dangerous, Biden said Tuesday, “It remains to be seen. Could be.”
In a new open letter signed by Elon Musk, Steve Wozniak, Andrew Yang and more on Wednesday (March 29), leaders in technology, academia and politics came together to call for a moratorium on training AI systems “more powerful than GPT-4” for “at least 6 months.”
The letter states that “AI systems with human-competitive intelligence can pose profound risks to society and humanity,” including the increased spread of propaganda and fake news as well as automation leading to widespread job loss. “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?” the letter asks.
By drawing the line at AI models more powerful than GPT-4, the signees are likely pointing to generative artificial intelligence — a term encompassing a subset of AI that can create new content after being trained via the input of millions or even billions of pieces of data. While some companies license or create their own training data, a large number of AIs are trained using data sets scraped from the web that contain copyright-protected material, including songs, books, articles, images and more. This practice has sparked widespread debate over whether AI companies should be required to obtain consent or to compensate the rights holders, and whether the fast-evolving models will endanger the livelihoods of musicians, illustrators and other creatives.
Before late 2022, generative AI was little discussed outside of tech-savvy circles, but it has gained national attention over the last six months. Popular examples of generative AI today include image generators like DALL-E 2, Stable Diffusion and Midjourney, which use simple text prompts to conjure up realistic pictures. Chatbots built on large language models (“LLMs”), like ChatGPT, are also considered generative, as are machines that can create new music at the touch of a button. Though generative AI models in music have yet to make as many headlines as chatbots and image generators, companies like Boomy, Soundful, Beatlab, Google’s Magenta, OpenAI and others are already building them, leading to fears that their output could one day threaten human-made music.
The letter urging the pause in AI training was signed by some of AI’s biggest executives. They notably include Stability AI CEO Emad Mostaque, Conjecture AI CEO Connor Leahy, Unanimous AI CEO and chief scientist Louis Rosenberg and Scale AI CEO Julien Billot. It was also signed by Pinterest co-founder Evan Sharp, Skype co-founder Jaan Tallinn and Ripple CEO Chris Larsen.
Other signees include several engineers and researchers at Microsoft, Google and Meta, though the list notably does not include any names from OpenAI, the firm behind ChatGPT and GPT-4.
“This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities,” the letter continues. Rather, the industry must “jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.”
The letter comes only a few weeks after several major organizations in the entertainment industry, including in music, came together to release a list of seven principles, detailing how they hope to protect and support “human creativity” in the wake of the AI boom. “Policymakers must consider the interests of human creators when crafting policy around AI,” the coalition wrote. “Creators live on the forefront of, and are building and inspiring, evolutions in technology and as such need a seat at the table.”
A wide coalition of music industry organizations has joined together to release a series of core principles regarding artificial intelligence — the first collective stance the entertainment business has taken on the topic. Announced during the panel “Welcome to the Machine: Art in the Age of A.I.,” held on Thursday (March 16) at South by Southwest (SXSW) and moderated by Billboard deputy editorial director Robert Levine, the principles reveal a growing sense of urgency among entertainment industry leaders to address the quickly evolving issue.
“Over the past few months, I think [generative artificial intelligence] has gone from a ‘someday’ issue to a today issue,” said Levine. “It’s coming much quicker than anyone thought.”
In response to the fast-approaching collision of generative AI and the entertainment business, the principles detail the need for using the new technology to “empower human expression” while also asserting the importance of representing “creators’ interests…in policymaking” regarding the technology. Principles geared toward the latter include ensuring that AI developers acquire licenses for artistic works used in the “development and training of AI models” — and keep records of which works are used — and that governments refrain from creating “copyright or other IP exemptions” for the technology.
Among the 40 different groups that have joined the coalition — dubbed the Human Artistry Campaign — are music industry leaders including the Recording Industry Association of America (RIAA), National Music Publishers’ Association (NMPA), American Association of Independent Music (A2IM), SoundExchange, ASCAP, BMI and more.
Read the full list of principles below and get more information, including the full list of groups involved in the effort, here.
Core Principles for Artificial Intelligence Applications in Support of Human Creativity and Accomplishments:
Technology has long empowered human expression, and AI will be no different.
For generations, various technologies have been used successfully to support human creativity. Take music, for example… From piano rolls to amplification to guitar pedals to synthesizers to drum machines to digital audio workstations, beat libraries and stems and beyond, musical creators have long used technology to express their visions through different voices, instruments, and devices. AI already is and will increasingly play that role as a tool to assist the creative process, allowing for a wider range of people to express themselves creatively.
Moreover, AI has many valuable uses outside of the creative process itself, including those that amplify fan connections, hone personalized recommendations, identify content quickly and accurately, assist with scheduling, automate and enhance efficient payment systems – and more. We embrace these technological advances.
Human-created works will continue to play an essential role in our lives.
Creative works shape our identity, values, and worldview. People relate most deeply to works that embody the lived experience, perceptions, and attitudes of others. Only humans can create and fully realize works written, recorded, created, or performed with such specific meaning. Art cannot exist independent of human culture.
Use of copyrighted works, and use of the voices and likenesses of professional performers, requires authorization, licensing, and compliance with all relevant state and federal laws.
We fully recognize the immense potential of AI to push the boundaries for knowledge and scientific progress. However, as with predecessor technologies, the use of copyrighted works requires permission from the copyright owner. AI must be subject to free-market licensing for the use of works in the development and training of AI models. Creators and copyright owners must retain exclusive control over determining how their content is used. AI developers must ensure any content used for training purposes is approved and licensed from the copyright owner, including content previously used by any pre-trained AIs they may adopt. Additionally, performers’ and athletes’ voices and likenesses must only be used with their consent and fair market compensation for specific uses.
Governments should not create new copyright or other IP exemptions that allow AI developers to exploit creators without permission or compensation.
AI must not receive exemptions from copyright law or other intellectual property laws and must comply with core principles of fair market competition and compensation. Creating special shortcuts or legal loopholes for AI would harm creative livelihoods, damage creators’ brands, and limit incentives to create and invest in new works.
Copyright should only protect the unique value of human intellectual creativity.
Copyright protection exists to help incentivize and reward human creativity, skill, labor, and judgment — not output solely created and generated by machines. Human creators, whether they use traditional tools or express their creativity using computers, are the foundation of the creative industries, and we must ensure that human creators are paid for their work.
Trustworthiness and transparency are essential to the success of AI and protection of creators.
Complete recordkeeping of copyrighted works, performances, and likenesses, including the way in which they were used to develop and train any AI system, is essential. Algorithmic transparency and clear identification of a work’s provenance are foundational to AI trustworthiness. Stakeholders should work collaboratively to develop standards for technologies that identify the input used to create AI-generated output. In addition to obtaining appropriate licenses, content generated solely by AI should be labeled describing all inputs and methodology used to create it — informing consumer choices, and protecting creators and rightsholders.
Creators’ interests must be represented in policymaking.
Policymakers must consider the interests of human creators when crafting policy around AI. Creators live on the forefront of, and are building and inspiring, evolutions in technology and as such need a seat at the table in any conversations regarding legislation, regulation, or government priorities regarding AI that would impact their creativity and the way it affects their industry and livelihood.
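To illustrate the kind of recordkeeping and labeling the transparency principle above calls for, here is a hypothetical provenance record for a track generated solely by AI, sketched in Python; the schema and every field name are invented for illustration and are not part of the campaign’s principles or any published standard.

```python
# Hypothetical provenance label for a track generated solely by AI.
# The schema is an invented example, not a real industry standard.
import json

provenance = {
    "work_id": "example-track-001",
    "generated_solely_by_ai": True,
    "model": {"name": "example-music-model", "version": "1.0"},
    "training_inputs": [
        # One entry per licensed work used to train the model, per the principles.
        {"title": "Example Song A", "rights_holder": "Example Label", "license_id": "LIC-123"},
    ],
    "methodology": "text prompt -> generative model -> mastered audio",
    "prompt": "an upbeat synth-pop track",
}

# Serialize the record so it can travel with the audio file's metadata.
print(json.dumps(provenance, indent=2))
```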
A new policy report from the U.S. Copyright Office says that songs and other artistic works created with the assistance of artificial intelligence can sometimes be eligible for copyright registration, but only if the ultimate author remains a human being.
The report, released by the federal agency on Wednesday (March 15), comes amid growing interest in the future role that could be played in the creation of music by so-called generative AI tools — similar to the much-discussed ChatGPT.
Copyright protection is strictly limited to content created by humans, leading to heated debate over the status of AI-generated works. In a closely watched case last month, the Copyright Office decided that a graphic novel featuring AI-generated images was eligible for protection, but that the individual images couldn’t be protected.
In Wednesday’s report, the agency said that the use of AI tools does not automatically bar copyright registration, but that it would be closely scrutinized and could not play a dominant role in the creative process.
“If a work’s traditional elements of authorship were produced by a machine, the work lacks human authorship and the Office will not register it,” the agency wrote. “For example, when an AI technology receives solely a prompt from a human and produces complex written, visual, or musical works in response, the traditional elements of authorship are determined and executed by the technology — not the human user.”
The report listed examples of AI-aided works that might still be worthy of protection, like one that creatively combined AI-generated elements into something new, or a work that was AI-generated that an artist then heavily modified after the fact. And it stressed that other technological tools were still fair game.
“A visual artist who uses Adobe Photoshop to edit an image remains the author of the modified image, and a musical artist may use effects such as guitar pedals when creating a sound recording,” the report said. “In each case, what matters is the extent to which the human had creative control over the work’s expression and ‘actually formed’ the traditional elements of authorship.”
Under the rules laid out in the report, the Copyright Office said that anyone submitting such works must disclose which elements were created by AI and which were created by a human. The agency said that any AI-inclusive work that was previously registered without such a disclosure must be updated — and that failure to do so could result in the cancellation of the copyright registration.
Though aimed at providing guidance, Wednesday’s report avoided hard-and-fast rules. It stressed that analyzing copyright protection for AI-assisted works would be “necessarily a case-by-case inquiry,” and that the final outcome would always depend on individual circumstances, including “how the AI tool operates” and “how it was used to create the final work.”
And the report didn’t even touch on a potentially thornier legal question: whether the creators of AI platforms infringe the copyrights of the vast number of earlier works that are used to “train” the platforms to spit out new works. In October, the Recording Industry Association of America (RIAA) warned that such providers were violating copyrights en masse by using existing music to train their machines.
“To the extent these services, or their partners, are training their AI models using our members’ music, that use is unauthorized and infringes our members’ rights by making unauthorized copies of our members’ works,” the RIAA said at the time.
Though Wednesday’s report did not offer guidance on that question, the Copyright Office said it had plans to weigh in soon.
“[The Office] has launched an agency-wide initiative to delve into a wide range of these issues,” the agency wrote. “Among other things, the Office intends to publish a notice of inquiry later this year seeking public input on additional legal and policy topics, including how the law should apply to the use of copyrighted works in AI training and the resulting treatment of outputs.”
In the recent article “What Happens To Songwriters When AI Can Generate Music,” Alex Mitchell offers a rosy view of a future of AI-composed music coexisting in perfect barbershop harmony with human creators — but there is a conflict of interest here, as Mitchell is the CEO of an app that does precisely that. It’s almost like cigarette companies in the 1920s saying cigarettes are good for you.
Yes, the honeymoon of new possibilities is sexy, but let’s not pretend this is benefiting the human artist as much as corporate clients who’d rather pull a slot machine lever to generate a jingle than hire a human.
While I agree there are parallels between the invention of the synthesizer and AI, there are stark differences, too. The debut of the theremin — the first electronic instrument — playing the part of a lead violin in an orchestra was scandalous and fear-evoking. Audiences hated its sinusoidal wave lack of nuance, and some claimed it was “the end of music.” That seems ludicrous and pearl-clutching now, and I worship the chapter of electrified instruments that followed (thank you, Sister Rosetta Tharpe and Chuck Berry), but in a way, they were right. It was the closing of a chapter, and the birth of something new.
Is new always better, though? Or is there a sweet spot ratio of machine to human? I often wonder this sitting in my half analog, half digital studio, as the stakes get ever higher from flirting with the event horizon of technology.
In the same article, Diaa El All (another CEO of an AI music-generation app) claims that drummers were pointlessly scared of drum machines and sample banks replacing their jobs because it’s all just another fabulous tool. (Guess he hasn’t been to many shows where singers perform with just a laptop.) Since I have spent an indecent portion of my modeling money collecting vintage drum machines (cuz yes, they’re fabulous), I can attest to the fact that I do indeed hire fewer drummers. In fact, since I started using sample libraries, I hire fewer musicians altogether. While this is a great convenience for me, the average upright bassist who used to be able to support his family with his trade now has to remain childless or take two other jobs.
Should we halt progress to maintain a placebo usefulness for obsolete craftsmen? No, change and competition are good, if not inevitable, ergonomics. But let’s not be naive about the casualties.
The gun and the samurai come to mind. For centuries, samurai were part of an elite warrior class who rigorously trained in kendo (the way of the sword) and bushido (a moral code of honor and indifference to pain) since childhood. As a result, winning wars was a meritocracy of skill and strategy. Then a Chinese ship with Portuguese sailors showed up with guns.
When the feudal lord Oda Nobunaga saw the potential in these contraptions, he ordered hundreds be made for his troops. Suddenly a farm boy with no skill could take down an archer or swordsman who had trained for years. Once coordinated marching and reloading formations were developed, it was an entirely new power dynamic.
During the economic crunch of the Napoleonic wars, a similar tidal shift occurred. Automated textile equipment allowed factory owners to replace loyal employees with machines and fewer, cheaper, less skilled workers to oversee them. As a result of jobless destitution, there was a region-wide rebellion of weavers and Luddites burning mills, stocking frames and lace-making machines, until the army executed them and held show trials to deter others from acts of “industrial sabotage.”
The poet Lord Byron opposed this new legislation, which made machine-breaking a capital crime — ironic considering his daughter, Ada Lovelace, would go on to pioneer computing with Charles Babbage. Oh, the tangled neural networks we weave.
Look what Netflix did to Blockbuster rentals. Or what Napster did to the recording artist. Even what the democratization of homemade porn streaming did to the porn industry. More recently, video games have usurped films. You cannot add something to an ecosystem without subtracting something else. It would be like smartphone companies telling fax machine manufacturers not to worry. Only this time, the fax machines are humans.
Later in the article, Mac Boucher (creative technologist and co-creator of non-fungible token project WarNymph along with his sister Grimes) adds another glowing review of bot- and button-based composition: “We will all become creators now.”
If everyone is a creator, is anyone really a creator?
An eerie vision comes to mind of a million TikTokers dressed as opera singers on stage, standing on the blueish corpses of an orchestra pit, singing over each other in a vainglorious cacophony, while not a single person sits in the audience. Just rows of empty seats reverberating the pink noise of digital narcissism back at them. Silent disco meets the Star Gate sequence’s death choir stack.
While this might sound like the bitter gatekeeping of a tape machine purist (only slightly), now might be a good time to admit I was one of the early projects to incorporate AI-generated lyrics and imagery. My band, Uni and The Urchins, has a morbid fascination with futurism and the wild west of Web 3.0. Who doesn’t love robots?
But I do think that, in order to make art, the “obstacles” actually served as a filtration device. Think Campbell’s hero’s journey: the learning curve of mastering an instrument, the physical adventure of discovering new music at a record shop or befriending the cool older guy to get his Sharpie-graffitied mix CD, saving up to buy your first guitar, enduring ridicule, the irrational desire to pursue music against the odds. (James Brown didn’t own a pair of shoes until he was 8 years old, and now he is canonized as King.)
Meanwhile, in 2022, surveys show that many kids feel valueless unless they’re an influencer or “artist,” so indulging the urge toward content creation over craft has become criminally easy, flooding the markets with more karaoke, pantomime and metric-based mush, rooted in no authentic movement. (I guess Twee capitalist-core is a culture, but not compared to the Vietnam War, slavery, the space race, the invention of LSD, the discovery of the subconscious, Indian gurus, the sexual revolution or the ’90s heroin epidemic, all of which inspired new genres.)
Not to sound like Ted Kaczynski’s manifesto, but technology is increasingly the hand inside the sock puppet, not the other way around.
Do I think AI will replace a lot of jobs? Yes, though not immediately; it’s still crude. Do I think this upending is a net loss? In the long term, no. It could incentivize us to invent entirely new skills to front-run it. (Remember when “learn to code” was an offensive meme?) In fact, I’m very eager to see how we co-evolve or eventually merge into a transhuman cyber Seraphim, once artificial general intelligence goes quantum.
But this will be a Faustian trade, have no illusions.
Charlotte Kemp Muhl is the bassist for NYC art-rock band UNI and the Urchins. She has directed all of UNI and The Urchins’ videos and mini-films and engineered, mixed and mastered their upcoming debut album Simulator (out Jan. 13, 2023, on Chimera Music) herself. UNI and the Urchins’ AI-written song/AI-made video for “Simulator” is out now.