artificial intelligence
The American Society of Composers, Authors and Publishers (ASCAP) argued that AI companies need to license material from copyright owners to train their models and that “a new federal right of publicity… is necessary to address the unprecedented scale on which AI tools facilitate the improper use of a creator’s image, likeness, and voice” in a document filed with the Copyright Office on Wednesday (Dec. 6).
The Copyright Office announced that it was studying “the copyright issues raised by generative artificial intelligence” in August and solicited written comments from relevant players in the space. Initial written comments had to be submitted by Oct. 30, while reply comments — which give organizations like ASCAP the chance to push back against assertions made by AI companies like Anthropic and OpenAI — were due Dec. 6.
Generative AI models require training: They ingest large amounts of data to identify patterns. “AI training is a computational process of deconstructing existing works for the purpose of modeling mathematically how [they] work,” Google wrote in its reply comments for the Copyright Office. “By taking existing works apart, the algorithm develops a capacity to infer how new ones should be put together” — hence the “generative” label.
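To make that idea concrete, here is a deliberately tiny sketch, in Python, of the deconstruct-then-generate principle Google describes: it counts which word follows which in a scrap of text, then strings a “new” sequence together from those statistics. Real generative models are incomparably more sophisticated; this toy is illustrative only.

```python
import random
from collections import defaultdict

# Toy illustration of "deconstruct, then generate." Real generative AI
# models are vastly more complex; this only sketches the principle.
corpus = "the melody rises and the melody falls and the chorus returns".split()

# "Training": take the existing work apart into word -> next-word statistics.
model = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    model[current_word].append(next_word)

# "Generation": use those statistics to infer how a new sequence
# should be put together.
word = "the"
output = [word]
for _ in range(8):
    word = random.choice(model.get(word, corpus))  # fall back if no successor
    output.append(word)
print(" ".join(output))
```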
ASCAP represents songwriters, composers, and music publishers, and its chief concern is that AI companies will be allowed to train models on its members’ works without coming to some sort of licensing arrangement ahead of time. “Numerous comments from AI industry members raise doubts about the technical or economic feasibility of licensing as a model for the authorized use of protected content,” ASCAP writes. “But armchair speculations about the efficiency of licensing do not justify a rampant disregard for creators’ rights.”
ASCAP adds that “numerous large-scale AI tools have already been developed exclusively on the basis of fully licensed or otherwise legally obtained materials” — pointing to Boomy, Stable Audio, Generative AI by Getty Images, and Adobe Firefly — “demonstrating that the development of generative AI technologies need not come at the expense of creators’ rights.”
ASCAP also calls for the implementation of a new federal right-of-publicity law, worried that voice-cloning technology, for example, can threaten artists’ livelihood. “Generative AI technology introduces unprecedented possibilities for the unauthorized use of a creator’s image, likeness, and voice,” ASCAP argues. “The existing patchwork of state laws were not written with this technology in view, and do not adequately protect creators.”
“Without allowing the artists and creators to control their voice and likeness,” ASCAP continues, “this technology will create both consumer confusion and serious financial harm to the original music creators.”
LONDON — Representatives of the creative industries are urging legislators not to water down forthcoming regulations governing the use of artificial intelligence, including laws around the use of copyrighted music, amid fierce lobbying from big tech companies.
On Wednesday (Dec. 6), policymakers from the European Parliament, Council and European Commission will meet in Brussels to negotiate the final text of the EU’s Artificial Intelligence Act — the world’s first comprehensive set of laws regulating the use of AI.
The current version of the AI Act, which was provisionally approved by Members of European Parliament (MEPs) in a vote in June, contains several measures that will help determine what tech companies can and cannot do with copyright-protected musical works. Among them is a legal requirement that companies using generative AI models like OpenAI’s ChatGPT or Anthropic’s Claude 2 (classified by the EU as “general purpose AI systems”) provide summaries of any copyrighted works, including music, that they use to train their systems.
The draft legislation will also force developers to clearly identify content that is created by AI, as opposed to human works. In addition, tech companies will have to ensure that their systems are designed in a way that prevents them from generating illegal content.
While these transparency provisions have been openly welcomed by music executives, behind the scenes technology companies have been actively lobbying policymakers to try to weaken the regulations, arguing that such obligations could put European AI developers at a competitive disadvantage.
“We believe this additional legal complexity is out of place in the AI Act, which is primarily focused on health, safety, and fundamental rights,” said a coalition of tech organizations and trade groups, including the Computer and Communications Industry Association, which counts Alphabet, Apple, Amazon and Meta among its members, in a joint statement dated Nov. 27.
In the statement, the tech representatives said they were concerned “about the direction of the current proposals to regulate” generative AI systems and said the EU’s proposals “do not take into account the complexity of the AI value chain.”
European lawmakers are also in disagreement over how to govern the nascent technology, with EU member states France, Germany and Italy understood to favor light-touch regulation for developers of generative AI, according to sources close to the negotiations.
In response, music executives are making a final pitch to legislators to ensure that AI companies respect copyright laws and strengthen existing protections against the unlawful use of music in training AI systems.
Helen Smith, executive chair of European independent labels group IMPALA, tells Billboard that the inclusion of “meaningful transparency and record keeping obligations” in the final legislation is a “must for creators and rightsholders” if they are to be able to effectively engage in licensing negotiations.
In a letter sent to EU ambassadors last week, Björn Ulvaeus, founding member of ABBA and president of CISAC, the international trade organization for copyright collecting societies, warned policymakers that “without the right provisions requiring transparency, the rights of the creator to authorise and get paid for use of their works will be undermined and impossible to implement.”
The European Composer and Songwriter Alliance (ECSA), International Federation of Musicians (FIM) and International Artist Organisation (IAO) are also calling for guarantees that the rights of their members are respected.
If legislators fail to reach a compromise agreement at Wednesday’s fifth and intended-to-be-final negotiating session on the AI Act, there are a number of possible outcomes, including further “trilogue” talks the following week. If a deal doesn’t happen this month, however, there is a very real risk that the AI Act won’t be passed before the European parliamentary elections take place in June.
If that happens, a new parliament could theoretically scrap the bill altogether. Executives closely monitoring events in Brussels, the de facto capital of the European Union, say that is unlikely, however, and that there is strong political will on all sides to find a resolution before the end of the year, when the current Spain-led presidency of the EU Council ends.
Because the AI Act is a regulation and not a directive — such as the equally divisive and just-as-fiercely-lobbied 2019 EU Copyright Directive — it will pass directly into law in all 27 EU member states, though only once it has been fully approved by the EU institutions in a final vote and has officially entered into force (a timeframe that could be set during the negotiations, and could take up to three years).
At that point, the act’s regulations will apply to any company that operates in the European Union, regardless of where it is based. Just as significant, if passed, the act will provide a world-first legislative model for other governments and international jurisdictions looking to draft their own laws on the use of artificial intelligence.
“It is important to get this right,” says IMPALA’s Smith, “and seize the opportunity to set a proper framework around these [generative AI] models.”
In April, Grimes encouraged artists to make music using her voice — as replicated by artificial intelligence-powered technology. Even as she embraced a high-tech future, however, she noted that there were some old-fashioned legal limitations. “I don’t own the rights to the vocals from my old albums,” she wrote on X. “If you make remixes, they may get taken down.”
Artificial intelligence has dominated the hype cycle in 2023. But most signed artists who are enthusiastic about testing out this technology will have to move cautiously, wary of the fact that preexisting contracts may assert some level of control over how they can use their voice. “In general, in a major label deal, they’re the exclusive label for name, likeness and voice under the term,” says one veteran manager who spoke on the condition of anonymity. “Labels might be mad if artists went around them and did a deal themselves. They might go, ‘Hey, wait a minute, we have the rights to this.’”
On the flip side, labels probably can’t (or won’t) move unilaterally either. “In our agreements, in a handful of territories, we’ve been getting exclusive name, image, likeness and voice rights in connection with recordings for years,” says one major label source. That said, “as a practical matter, we wouldn’t license an artist’s voice for a voice model or for any project without the artists being on board with it. It would be bad business for us.”
For the moment, both sides are inching forward, trying to figure out how to “interpret new technology with arcane laws,” as Arron Saxe, who manages several artists’ estates, puts it. “It’s an odd time because the government hasn’t stepped in and put down real guidelines around AI,” adds Dan Smith, general manager of the dance label Armada Music.
That means guidelines must be drawn via pre-existing contracts, most of which were not written with AI in mind, and often vary from one artist to the next. Take a recent artist deal sent out by one major label and reviewed by Billboard: Under the terms, the label has the “exclusive right to record Artist Performances” with “performance” broadly defined to include “singing, speaking… or such performance itself, as the context requires.” The word “recording” is similarly roomy: “any recording of sound…by any method and on any substance or material, whether now or hereafter known.”
Someone in this deal probably couldn’t easily go rogue and build a voice-cloning model on newly recorded material without permission. Even to participate in YouTube’s recently announced AI voice generation experiment, some artists needed to get permission in the form of a “label waiver,” according to Audrey Benoualid, a partner at Myman Greenspan Fox Rosenberg Mobasser Younger & Light. (In an interview about YouTube’s new feature, Demis Hassabis, CEO of Google DeepMind, said only that it has “been complicated” to negotiate deals with various music rights holders.) Even after an artist’s deal ends, if their recordings remain with a label, they would have to be careful to only train voice-cloning tech with material that isn’t owned exclusively by their former record company.
It’s not just artists that are interested in AI opportunities, though. Record labels stand to gain from developing licensing deals with AI companies for their entire catalogs, which could in turn bring greater opportunities for artists who want to participate. At the Made on YouTube event in September, Warner Music Group CEO Robert Kyncl said it’s the label’s “job” to make sure that artists who lean into AI “benefit.” At the same time, he added, “It’s also our job together to make sure that artists who don’t want to lean in are protected.”
In terms of protections, major label deals typically come with a list of approval rights: Artists will ask that they get the chance to sign off on any sample of their recordings or the use of one of their tracks in a movie trailer. “We believe that any AI function is just another use of the talents’ intellectual property that would take some approval by the creator,” explains Leron Rogers, a partner at Fox Rothschild.
In many states, artists also have protection under the “right of publicity,” which says that people have control over the way others can exploit their individual identities. “Under that umbrella is where things like the right to your voice, your face, your likeness are protected and can’t be mimicked because it’s unfair competition,” says Lulu Pantin, founder of Loop Legal. “But because those laws are not federal, they’re inconsistent, and every state’s laws are slightly different” — not all states specifically call out voices, for example — “[so] there’s concern that that’s not going to provide robust protection given how ubiquitous AI has become already.” (A lack of federal law also limits the government’s ability to push for enforcement abroad.)
To that end, a bipartisan group of senators recently introduced a draft proposal of the NO FAKES Act (“Nurture Originals, Foster Art, and Keep Entertainment Safe”), which would enshrine a federal right for artists, actors and others to take legal action against anyone who creates unauthorized “digital replicas” of their image, voice, or likeness. “Artists would now gain leverage they didn’t have before,” says Mike Pelczynski, who serves on the advisory board of the company voice-swap.ai.
While the entertainment industry tracks NO FAKES’ progress, Smith from Armada believes “we will probably start to see more artist agreements that are addressing the use of your voice.” Sure enough, Benoualid says that in new label deals for her clients, she now asks for approval over any use of an artist’s name, likeness, or voice in connection with AI technology. “Express written approval should be required prior to a company reproducing vocals, recordings, or compositions for the purpose of training AI platforms,” agrees Matthew Gorman, a lawyer at Cox & Palmer.
Pantin has been keeping an eye on the way other creative fields are handling this fast-evolving tech to see if there are lessons that can be imported into music. “One thing that I’ve been trying to do and I’ve had success in some instances with is asking the rights holders — the publishers, the labels — for consent rights from the individual artists or songwriter before their work is used to train generative AI,” she says. “On the book publishing side, the Authors Guild has put forth language they recommended are included in all publishing agreements, and so I’m drawing from that and extending that to songwriting.”
All these discussions are new, and the long-term impact of AI-driven technology on the creative fields remains unclear. Daouda Leonard, who manages Grimes, is adamant that in the music industry’s near future, “the licensing of voice is going to become a valuable asset.” Others are less sure — “nobody really knows how important this will be,” the major label source says.
Perhaps Grimes put it best on X: “We expect a certain amount of chaos.”
Dennis Kooker, president of global digital business at Sony Music Entertainment, represented the music business at Sen. Chuck Schumer’s (D-NY) seventh artificial intelligence insight forum in Washington, D.C., on Wednesday (Nov. 29). In his statement, Kooker implored the government to pass new legislation protecting copyright holders and ensuring the development of “responsible and ethical generative AI.”
The executive revealed that Sony has already sent “close to 10,000 takedowns to a variety of platforms hosting unauthorized deepfakes that SME artists asked us to take down.” He said these platforms, including streamers and social media sites, are “quick to point to the loopholes in the law as an excuse to drag their feet or to not take the deepfakes down when requested.”
Presently, there is no federal law that explicitly requires platforms to take down songs that impersonate an artist’s voice. Platforms are only obligated to do so when a copyright (in a sound recording or a musical work) is infringed, as stipulated by the Digital Millennium Copyright Act (DMCA). Interest in using AI to clone the voices of famous artists has grown rapidly since a song with AI impersonations of Drake and The Weeknd went viral earlier this year. The track, called “Heart on My Sleeve,” has become one of the most prominent examples of music-related AI.
A celebrity’s voice and likeness can be protected by “right of publicity” laws that safeguard them from unauthorized exploitation, but this right is limited. Its protections vary from state to state and are even more limited post-mortem. In May, Billboard reported that the major labels — Sony, Universal Music Group and Warner Music Group — had been in talks with Spotify, Apple Music and Amazon Music to create a voluntary system for takedowns of right of publicity violations, much like the one laid out by the DMCA, according to sources at all three majors. It is unclear from Kooker’s remarks whether the platforms that are dragging their feet on voice clone removals include the three streaming services that previously took part in these discussions.
In his statement, Kooker asked the Senate forum to establish a federal right of publicity that would provide stronger and more uniform protection for artists. “Creators and consumers need a clear unified right that sets a floor across all fifty states,” he said. This echoes what UMG general counsel/executive vp of business and legal affairs Jeffrey Harleston asked of the Senate during a July AI hearing.
Kooker expressed his “sincere gratitude” to Sens. Chris Coons, Marsha Blackburn, Amy Klobuchar and Thom Tillis for releasing a draft bill called the NO FAKES Act (“Nurture Originals, Foster Art, and Keep Entertainment Safe”) in October, which would create a federal property right in one’s voice or likeness and protect against unauthorized AI impersonations. At its announcement, the NO FAKES Act drew resounding praise from music business organizations, including the RIAA and the American Association of Independent Music.
Kooker also stated that, at this early stage, many of the generative AI products available are “not expanding the business model or enhancing human creativity.” He pointed to a “deluge of 100,000 new recordings delivered to [digital service providers] every day” and said that some of these songs are “generated using generative AI content creation tools.” He added, “These works flood the current music ecosystem and compete directly with human artists…. They reduce and diminish the earnings of human artists.”
“We have every reason to believe that various elements of AI will become routine in the creative process… [as well as] other aspects of our business,” like marketing and royalty accounting, Kooker continued. He said Sony Music has already started “active conversations” with “roughly 200” different AI companies about potential partnerships.
Still, he stressed that five key issues remain to be addressed to “assure a thriving marketplace for AI and music.” Read his five points, as written in his prepared statement, below:
Assure Consent, Compensation, and Credit. New products and businesses built with music must be developed with the consent of the owner and appropriate compensation and credit. It is essential to understand why the training of AI models is being done, what products will be developed as a result, and what the business model is that will monetize the use of the artist’s work. Congress and the agencies should assure that creators’ rights are recognized and respected.
Confirm That Copying Music to Train AI Models is Not Fair Use. Even worse are those that argue that copyrighted content should automatically be considered fair use so that protected works are never compensated for usage and creators have no say in the products or business models that are developed around them and their work. Congress should assure and agencies should presume that reproducing music to train AI models, in itself, is not a fair use.
Prevent the Cloning of Artists’ Voices and Likenesses Without Express Permission. We cannot allow an artist’s voice or likeness to be cloned for use without the express permission of the artist. This is a very personal decision for the artist. Congress should pass into law effective federal protections for name, image, and likeness.
Incentivize Accurate Record-Keeping. Correct attribution will be a critical element to artists being paid fairly and correctly for new works that are created. In addition, rights can only be enforced around the training of AI when there are accurate records about what is being copied. Otherwise, the inability to enforce rights in the AI marketplace equates to a lack of rights at all, producing a dangerous imbalance that prevents a thriving ecosystem. This requires strong and accurate record keeping by the generative AI platforms, a requirement that urgently needs legislative support to ensure incentives are in place so that it happens consistently and correctly.
Assure Transparency for Consumers and Artists. Transparency is necessary to clearly distinguish human-created works from AI-created works. The public should know, when they are listening to music, whether that music was created by a human being or a machine.
Most conversations around AI in music focus on music creation, protecting artists and rightsholders, and differentiating human-made music from machine-made works. But there is more to discuss: AI has some less obvious strengths waiting to be explored. One use for the technology with immense potential to positively impact artists is music marketing.
As generative and complementary AI become a larger part of creative work in music, marketing will play a larger role than ever before. Music marketing isn’t just about reaching new and existing fans and promoting upcoming singles. Today, music marketing must establish an artist’s ownership of their work and ensure that the human creatives involved are known, recognized, and appreciated. We’re about to see the golden age of automation for artists who want to make these connections and gain this appreciation.
While marketing is a prerequisite to a creator’s success, it demands a lot of time, energy, and resources. According to Linktree’s 2023 Creator Report, 48% of creators who make $100-500k per year spend more than 10 hours on content creation every week. On top of that, three out of four creators want to diversify what they create but feel pressure to keep making what is rewarded by the algorithm. Rather than fighting the impossible battle of constantly evolving and cranking out more content to match what the algorithm is boosting this week, creatives can have a much greater impact by focusing on their brand and making high-quality content for their audience.
For indie artists without support from labels and dedicated promotion teams, the constant pressure to push their new single on TikTok, post on Instagram, and engage with fans while finding the time to make new music is overwhelming. The pressure is only building, thanks to changes in streaming payouts. Indie artists need to reach escape velocity faster.
AI-powered music marketing can lighten that lift, generating campaign templates and delivering the data artists need to reach their intended audience. AI can take the data that artists and creators generate and put it to work, automatically extracting insights from analytics to build marketing campaigns and map out tactics that get results.
AI-driven campaigns can give creators back the time they need to do what they do best: create. While artificial intelligence saves artists time and generates actionable solutions for music promotion, it still depends heavily on the artist’s input and human touch. Just as a flight captain has to set route information and parameters before switching on autopilot, an artist enters their content, ideas, intended audience, and desired outcome for the marketing campaign. Using this information, the AI-powered marketing platform can then provide the data and suggestions necessary to produce the targeted results.
Rather than taking over the creative process, AI should be used to assist and empower artists to be more creative. It can help put the joy back into what can be a truly fun process — finding, reaching, and engaging with fans.
A large portion of artists who have tapped into AI marketing have never spent money on marketing before, but with the help of these emerging tools, planning and executing effective campaigns is more approachable and intuitive. As the music industry learns more about artificial intelligence and debates its ethical implications in music creation, equal thought must be given to the opportunities that it unlocks for artists to grow their fanbases, fuel more sustainable careers, and promote their human-made work.
Megh Vakharia is the co-founder and CEO of SymphonyOS, the AI-powered marketing platform empowering creatives to build successful marketing campaigns that generate fan growth using its suite of smart, automated marketing tools.
Earlier this month, 760 stations owned by iHeartMedia simultaneously threw their weight behind a new single: The Beatles’ “Now and Then.” This was surprising, because the group broke up in 1970 and two of the members are dead. “Now and Then” began decades ago as a home recording by John Lennon; more recently, AI-powered audio technology allowed for the separation of the demo’s audio components — isolating the voice and the piano — which in turn enabled the living Beatles to construct a whole track around them and roll it out to great fanfare.
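Source separation of this kind is now broadly accessible. As a rough illustration (emphatically not the proprietary system used on the Lennon demo), the open-source Spleeter library can split a finished mix into stems, including a piano stem:

```python
# Minimal stem-separation sketch using the open-source Spleeter library
# (pip install spleeter). Illustrative only: this is not the technology
# that was used on "Now and Then."
from spleeter.separator import Separator

# The pretrained 5-stem model splits a mix into vocals, piano, drums,
# bass and "other" -- roughly the voice/piano isolation described above.
separator = Separator('spleeter:5stems')

# Writes one audio file per stem (vocals.wav, piano.wav, ...) to output/.
separator.separate_to_file('demo.mp3', 'output')
```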
“For three days, if you were a follower of popular culture, all you heard about was The Beatles,” says Arron Saxe, who represents several estates, including Otis Redding’s and Bill Withers’s. “And that’s great for the business of the estate of John Lennon and the estate of George Harrison and the current status of the two living legends.”
For many people, 2023 has been the year that artificial intelligence technology left the realm of science fiction and crashed rudely into daily life. And while AI-powered tools have the potential to impact wide swathes of the music industry, they are especially intriguing for those who manage estates or the catalogs of dead artists.
That’s because there are inherent constraints involved with this work: No one is around to make new stuff. But as AI models get better, they have the capacity to knit old materials together into something that can credibly pass as new — a reproduction of a star’s voice, for example. “As AI develops, it may impact the value of an estate, depending on what assets are already in the estate and can be worked with,” says Natalia Nataskin, chief content officer for Primary Wave, who estimates that she and her team probably spend around 25% of their time per week mulling AI (time she says they used to spend contemplating possibilities for NFTs).
And a crucial part of an estate manager’s job, Saxe notes, is “looking for opportunities to earn revenue.” “Especially with my clients who aren’t here,” he adds, “you’re trying to figure out, how do you keep it going forward?”
The answer, according to half a dozen executives who work with estates or catalogs of dead artists or songwriters, is “very carefully.” “We say no to 99 percent of opportunities,” Saxe says.
“You have this legacy that is very valuable, and once you start screwing with it, you open yourself up to causing some real damage,” adds Jeff Jampol, who handles the estates of The Doors, Janis Joplin and more. “Every time you’re going to do something, you have to be really protective. It’s hard to be on the bleeding edge.”
To work through these complicated issues, WME went so far as to establish an AI Task Force where agents from every division educate themselves on different platforms and tools to “get a sense for what is out there and where there are advantages to bring to our clients,” says Chris Jacquemin, the company’s head of digital strategy. The task force also works with WME’s legal department to gain “some clarity around the types of protections we need to be thinking about,” he continues, as well as with the agency’s legislative division in Washington, D.C.
At the moment, Jampol sees two potentially intriguing uses of AI in his work. “It would be very interesting to have, for instance, Jim Morrison narrate his own documentary,” he explains. He could also imagine using an AI voice model to read Morrison’s unrecorded poetry. (The Doors singer did record some poems during his lifetime, suggesting he was comfortable with this activity.)
On Nov. 15, Warner Music Group announced a potentially similar initiative, partnering with the French great Edith Piaf’s estate to create a voice model — based on the singer’s old interviews — which will narrate the animated film Edith. The executors of Piaf’s estate, Catherine Glavas and Christie Laume, said in a statement that “it’s been a special and touching experience to be able to hear Edith’s voice once again — the technology has made it feel like we were back in the room with her.”
The use of AI tech to recreate a star’s speaking voice is “easier” than attempting to put together an AI model that will replicate a star singing, according to Nataskin. “We can train a model on only the assets that we own — on the speaking voice from film clips, for example,” she explains.
In contrast, to train an AI model to sing like a star of old, the model needs to ingest a number of the artist’s recordings. That requires the consent of other rights holders — the owners of those recordings, which may or may not be the estate, as well as anyone involved in their composition. Many who spoke to Billboard for this story said they were leery of AI making new songs in the name of bygone legends. “To take a new creation and say that it came from someone who isn’t around to approve it, that seems to me like quite a stretch,” says Mary Megan Peer, CEO of the publisher peermusic.
Outside the United States, however, the appetite for this kind of experimentation may differ. Roughly a year ago, the Chinese company Tencent Music Entertainment told analysts that it used AI-powered technology to create new vocal tracks from dead singers, one of which went on to earn more than 100 million streams.
For now, at least, Nataskin characterized Primary Wave as focused on “enhancing” with AI tech, “rather than creating something from scratch.” And after Paul McCartney initially mentioned that artificial intelligence played a role in “Now and Then,” he quickly clarified on X that “nothing has been artificially or synthetically created,” suggesting there is still some stigma around the use of AI to generate new vocals from dead icons. The tech just “cleaned up some existing recordings,” McCartney noted.
This kind of AI use for “enhancing” and “cleaning up,” tweaking and adjusting has already been happening regularly for several years. “For all of the industry freakout about AI, there’s actually all these ways that it’s already operating everyday on behalf of artists or labels that isn’t controversial,” says Jessica Powell, co-founder and CEO of Audioshake, a company that uses AI-powered technology for stem separation. “It can be pretty transformational to be able to open up back catalog for new uses.”
The publishing company peermusic used AI-powered stem separation to create instrumentals for two tracks in its catalog — Gaby Moreno’s “Fronteras” and Rafael Solano’s “Por Amor” — which could then be placed in ads for Oreo and Don Julio, respectively. Much as the Beatles did, Łukasz Wojciechowski, co-founder of Astigmatic Records, used stem separation to isolate, and then remove distortion from, the trumpet part in a previously unreleased recording he found of jazz musician Tomasz Stanko. After the cleanup, the music could be released for the first time. “I’m seeing a lot of instances with older music where the quality is really poor, and you can restore it,” Wojciechowski says.
Powell acknowledges that these uses are “not a wild proposition like, ‘create a new voice for artist X!’” Those have been few and far between — at least the authorized ones. (Hip-hop fans have been using AI-powered technology to turn snippets of rap leaks from artists like Juice WRLD, who died in 2019, into “finished” songs.) For now, Saxe believes “there hasn’t been that thing where people can look at it and go, ‘They nailed that use of it.’ We haven’t had that breakout commercial popular culture moment.”
It’s still early, though. “Where we go with things like Peter Tosh or Waylon Jennings or Eartha Kitt, we haven’t decided yet,” says Phil Sandhaus, head of WME’s Legends division. “Do we want to use voice cloning technologies out there to create new works and have Eartha Kitt in her unique voice sing a brand new song she’s never sung before? Who knows? Every family, every estate is different.”
Additional reporting by Melinda Newman
Listeners remain wary of artificial intelligence, according to Engaging with Music 2023, a forthcoming report from the International Federation of the Phonographic Industry (IFPI) that seems aimed in particular at government regulators.
The IFPI surveyed 43,000 people across 26 countries, coming to the conclusion that 76% of respondents “feel that an artist’s music or vocals should not be used or ingested by AI without permission,” and 74% believe “AI should not be used to clone or impersonate artists without authorisation.”
The results are not surprising. Most listeners probably weren’t thinking much, if at all, about AI and its potential impacts on music before 2023. (Some still aren’t thinking about it: 89% of those surveyed said they were “aware of AI,” leaving 11% who have somehow managed to avoid a massive amount of press coverage this year.) New technologies are often treated with caution outside the tech industry.
It’s also easy for survey respondents to support statements about getting authorization for something before doing it — that generally seems like the right thing to do. But historically, artists haven’t always been interested in preemptively obtaining permission.
Take the act of sampling another song to create a new composition. Many listeners would presumably agree that artists should go through the process of clearing a sample before using it. In reality, however, many artists sample first and clear later, sometimes only if they are forced to.
In a statement, Frances Moore, IFPI’s CEO, said that the organization’s survey serves as a “timely reminder for policymakers as they consider how to implement standards for responsible and safe AI.”
U.S. policymakers have been moving slowly to develop potential guidelines around AI. In October, a bipartisan group of senators released a draft of the NO FAKES Act, which aims to prevent the creation of “digital replicas” of an artist’s image, voice, or visual likeness without permission.
“Generative AI has opened doors to exciting new artistic possibilities, but it also presents unique challenges that make it easier than ever to use someone’s voice, image, or likeness without their consent,” Senator Chris Coons said in a statement. “Creators around the nation are calling on Congress to lay out clear policies regulating the use and impact of generative AI.”
When Dierks Bentley’s band is looking for something to keep it occupied during long bus rides, the group has, at times, turned to artificial intelligence apps, asking them to create album reviews or cover art for the group’s alter ego, The Hot Country Knights.
“So far,” guitarist Charlie Worsham says, “AI does not completely understand The Hot Country Knights.”
By the same token, Music Row doesn’t completely understand AI, but the developing technology is here, inspiring tech heads and early adopters to experiment with it, using it, for example, to get a feel for how Bentley’s voice might fit a new song or to kick-start a verse that has the writer stumped. But it has also inspired a palpable amount of fear among artists anticipating that their voices will be misused and among musicians who feel they’ll be completely replaced.
“As a songwriter, I see the benefit that you don’t have to shell out a ton of money for a demo singer,” one attendee said during the Q&A section of an ASCAP panel about AI on Nov. 7. “But also, as a demo singer, I’m like, ‘Oh, shit, I’m out of a job.’”
That particular panel, moderated by songwriter-producer Chris DeStefano (“At the End of a Bar,” “That’s My Kind of Night”), was one of three AI presentations that ASCAP hosted at Nashville’s Twelve Thirty Club the morning after the ASCAP Country Music Awards, hoping to educate Music City about the burgeoning technology. The event addressed the creative possibilities ahead, the evolving legal discussion around AI and the ethical questions that it raises. (ASCAP has endorsed six principles for AI frameworks.)
The best-known examples of AI’s entry into music have revolved around the use of public figures’ voices in novel ways. Hip-hop artist Drake, in one prominent instance, had his voice re-created in a cover of “Bubbly,” originated by Colbie Caillat, who released her first country album, Along the Way, on Sept. 22.
“Definitely bizarre,” Caillat said during CMA Week activities. “I don’t think it’s good. I think it makes it all too easy.”
But ASCAP panelists outlined numerous ways AI can be employed for positive uses without misappropriating someone’s voice. DeStefano uses AI-powered iZotope software, which learned his mixing tendencies, to elevate his tracks to “another level.” Independent hip-hop artist Curtiss King has used AI to handle tasks outside of his wheelhouse that he can’t afford to outsource, such as graphic design or developing video ideas for social media. Singer-songwriter Anna Vaus instructed AI to create a 30-day social media campaign for her song “Halloween on Christmas Eve” and has used it to adjust her bio or press releases — “stuff,” she says, “that is not what sets my soul on fire.” It allows her more time, she said, for “sitting in my room and sharing my human experiences.”
All of this forward motion is happening faster in some other genres than it is in country, and the abuses — the unauthorized use of Drake’s voice or Tom Cruise’s image — have entertainment lawyers and the Copyright Office playing catch-up. Those examples test the application of the fair use doctrine in copyright law, which allows limited reuse of existing copyrighted works. But as Sheppard Mullin partner Dan Schnapp pointed out during the ASCAP legal panel, fair use requires the new piece to be a transformative work that does not damage the market for the original. When Drake’s voice is applied without his consent to a song he has never recorded and he receives no royalty, that arguably affects his marketability.
The Copyright Office has declined to offer copyright protection for AI creations, though works that are formed through a combination of human and artificial efforts complicate the rule. U.S. Copyright Office deputy general counsel Emily Chapuis pointed to a comic book composed by a human author who engaged AI for the drawings. Copyright was granted to the text, but not the illustrations.
The legal community is also sorting through rights to privacy and so-called “moral rights,” the originator’s ability to control how a copyright is used.
“You can’t wait for the law to catch up to the tech,” Schnapp said during the legal panel. “It never has and never will. And now, this is the most disruptive technology that’s hit the creative industry, generally, in our lifetime. And it’s growing exponentially.”
Which has some creators uneasy. Carolyn Dawn Johnson asked from the audience if composers should stop using their phones during writing appointments because ads can track typed and spoken activity, thus opening the possibility that AI begins to draw on content that has never been included in copyrighted material. The question was not fully answered.
But elsewhere, Nashville musicians are beginning to use AI in multiple ways. Restless Road has had AI apply harmonies to songwriter demos to see if a song might fit its sound. Elvie Shane, toying with a chatbot, developed an idea that he turned into a song about the meth epidemic, “Appalachian Alchemy.” Chase Matthew’s producer put a version of his voice on a song to convince him to record it. Better Than Ezra’s Kevin Griffin, who co-wrote Sugarland’s “Stuck Like Glue,” has asked AI to suggest second verses on songs he was writing — the verses are usually pedestrian, but he has found “one nugget” that helped finish a piece.
The skeptics have legitimate points, but skeptics also protested electronic instruments, drum machines, CDs, file sharing and programmed tracks. The industry has inevitably adapted to those technologies. And while AI is scary, early adopters seem to think it’s making them more productive and more creative.
“It’s always one step behind,” noted King. “It can make predictions based upon the habits that I’ve had, but there’s so many interactions that I have because I’m a creative and I get creative about where I’m going next … If anything, AI has given me like a kick in the butt to be more creative than I’ve ever been before.”
Songwriter Kevin Kadish (“Whiskey Glasses,” “Soul”) put the negatives of AI into a bigger-picture perspective.
“I’m more worried about it for like people’s safety and all the scams that happen on the phone,” he said on the ASCAP red carpet. “Music is the least of our worries with AI.”
The ousted leader of ChatGPT-maker OpenAI is returning to the company that fired him late last week, culminating a days-long power struggle that shocked the tech industry and brought attention to the conflicts around how to safely build artificial intelligence.
San Francisco-based OpenAI said in a statement late Tuesday: “We have reached an agreement in principle for Sam Altman to return to OpenAI as CEO with a new initial board.”
The board, which replaces the one that fired Altman on Friday, will be led by former Salesforce co-CEO Bret Taylor, who also chaired Twitter’s board before its takeover by Elon Musk last year. The other members will be former U.S. Treasury Secretary Larry Summers and Quora CEO Adam D’Angelo.
OpenAI’s previous board of directors, which included D’Angelo, had refused to give specific reasons for why it fired Altman, leading to a weekend of internal conflict at the company and growing outside pressure from the startup’s investors.
The chaos also accentuated the differences between Altman — who’s become the face of generative AI’s rapid commercialization since ChatGPT’s arrival a year ago — and members of the company’s board who have expressed deep reservations about the safety risks posed by AI as it gets more advanced.
Microsoft, which has invested billions of dollars in OpenAI and has rights to its current technology, quickly moved to hire Altman on Monday, as well as another co-founder and former president, Greg Brockman, who had quit in protest after Altman’s removal. That emboldened a threatened exodus: nearly all of the startup’s 770 employees signed a letter calling for the board’s resignation and Altman’s return.
One of the four board members who participated in Altman’s ouster, OpenAI co-founder and chief scientist Ilya Sutskever, later expressed regret and joined the call for the board’s resignation.
Microsoft in recent days had pledged to welcome all employees who wanted to follow Altman and Brockman to a new AI research unit at the software giant. Microsoft CEO Satya Nadella also made clear in a series of interviews Monday that he was still open to the possibility of Altman returning to OpenAI, so long as the startup’s governance problems are solved.
“We are encouraged by the changes to the OpenAI board,” Nadella posted on X late Tuesday. “We believe this is a first essential step on a path to more stable, well-informed, and effective governance.”
In his own post, Altman said that “with the new board and (with) Satya’s support, I’m looking forward to returning to OpenAI, and building on our strong partnership with (Microsoft).”
Co-founded by Altman as a nonprofit with a mission to safely build so-called artificial general intelligence that outperforms humans and benefits humanity, OpenAI later became a for-profit business but one still run by its nonprofit board of directors. It’s not clear yet if the board’s structure will change with its newly appointed members.
“We are collaborating to figure out the details,” OpenAI posted on X. “Thank you so much for your patience through this.”
Nadella said Brockman, who was OpenAI’s board chairman until Altman’s firing, will also have a key role to play in ensuring the group “continues to thrive and build on its mission.”
Hours earlier, Brockman returned to social media as if it were business as usual, touting a feature called ChatGPT Voice that was rolling out to users.
“Give it a try — totally changes the ChatGPT experience,” Brockman wrote, flagging a post from OpenAI’s main X account that featured a demonstration of the technology and playfully winked at the recent turmoil.
“It’s been a long night for the team and we’re hungry. How many 16-inch pizzas should I order for 778 people?” the user in the demo asks, using the number of people who work at OpenAI. ChatGPT’s synthetic voice responds by recommending around 195 pizzas, ensuring everyone gets three slices.
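For what it’s worth, the bot’s arithmetic holds up if you assume each 16-inch pie is cut into 12 slices (an assumption consistent with the reported numbers):

```python
import math

people = 778
slices_each = 3
slices_per_pizza = 12  # assumption: a 16-inch pie cut into 12 slices

# 778 * 3 = 2,334 slices needed; 2,334 / 12 = 194.5, so round up.
print(math.ceil(people * slices_each / slices_per_pizza))  # -> 195
```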
As for OpenAI’s short-lived interim CEO Emmett Shear, the second interim CEO in the days since Altman’s ouster, he posted on X that he was “deeply pleased by this result, after ~72 very intense hours of work.”
“Coming into OpenAI, I wasn’t sure what the right path would be,” wrote Shear, the former head of Twitch. “This was the pathway that maximized safety alongside doing right by all stakeholders involved. I’m glad to have been a part of the solution.”
This is The Legal Beat, a weekly newsletter about music law from Billboard Pro, offering you a one-stop cheat sheet of big new cases, important rulings and all the fun stuff in between.
This week: Sean “Diddy” Combs is accused of rape amid an ongoing wave of music industry sexual abuse lawsuits; Shakira settles her $15 million tax evasion case on the eve of trial; UMG defeats a lawsuit filed by artists over its lucrative ownership stake in Spotify; and more.
THE BIG STORY: Diddy Sued As Music #MeToo Wave Continues
Following a string of abuse cases against powerful men in the music industry, Sean “Diddy” Combs was sued by R&B singer and longtime romantic partner Cassie over allegations of assault and rape — and then settled the case just a day later.
In a graphic complaint, attorneys for Cassie (full name Casandra Ventura) claimed she “endured over a decade of his violent behavior and disturbed demands,” including repeated physical beatings and forcing her to “engage in sex acts with male sex workers” while he masturbated. Near the end of their relationship, Ventura claimed that Combs “forced her into her home and raped her while she repeatedly said ‘no’ and tried to push him away.”
Combs immediately denied the allegations as “offensive and outrageous.” He claimed Cassie had spent months demanding $30 million to prevent her from writing a tell-all book, a request he had “unequivocally rejected as blatant blackmail.”
Just a day after it was filed, Combs and Ventura announced that they had reached a settlement to resolve the case. Though quick settlements can happen in any type of lawsuit, it’s pretty unusual to see a case with such extensive and explosive allegations end just 24 hours after it was filed in court. “I wish Cassie and her family all the best,” Combs said in a statement. “Love.”
Both sides quickly put their spin on the settlement. A former staffer at Cassie’s law firm sent out a statement arguing that the quick resolution was “practically unheard of” and suggesting it showed the “evidence against Mr. Combs was overwhelming.” Combs’ lawyer, Ben Brafman, put out his own statement reiterating that a settlement — “especially in 2023” — was “in no way an admission of wrongdoing.”
The case against Combs is the most explosive sign yet that, six years after the start of the #MeToo movement, the music industry is currently experiencing something of a second iteration.
Sexual assault lawsuits were filed earlier this month against both former Recording Academy president/CEO Neil Portnow and label exec Antonio “L.A.” Reid, and in October longtime publishing exec Kenny MacPherson was sued for sexual harassment. Before that, sexual misconduct allegations were leveled at late Atlantic Records co-founder Ahmet Ertegun; Backstreet Boys member Nick Carter; singer Jason Derulo; and ex-Kobalt exec Sam Taylor.
Many of the recent cases have been filed under New York’s Adult Survivors Act, a statute that created a limited window for alleged survivors to take legal action over years-old accusations that would typically be barred under the statute of limitations. With that look-back period set to end on Thursday (Nov. 23), more cases could be coming in the next few days. Stay tuned…
Other top stories this week…
UMG WINS CASE OVER SPOTIFY STAKE – A federal judge dismissed a class action against Universal Music Group that challenged the fairness of its 2008 purchase of shares in Spotify. The case, filed by ’90s hip-hop duo Black Sheep, accused the company of taking lower-than-market royalty rates in return for a chunk of equity that’s now worth hundreds of millions. But the judge ruled that such a maneuver — even if proven true — wouldn’t have violated UMG’s contract with its artists.
A$AP ROCKY TO STAND TRIAL – A Los Angeles judge ruled that there was enough evidence for A$AP Rocky to stand trial on felony charges that he fired a gun at a former friend and collaborator outside a Hollywood hotel in 2021. The 35-year-old hip-hop star’s lawyer vowed that “Rocky is going to be vindicated when all this is said and done, without question.”
SHAKIRA SETTLES TAX CASE – The Colombian superstar agreed to a deal with Spanish authorities to settle her $15 million criminal tax fraud case, which could have resulted in a significant prison sentence for the singer. After maintaining her innocence for five years, Shakira settled on the first day of a closely watched trial: “I need to move past the stress and emotional toll of the last several years and focus on the things I love,” she said.
ROD WAVE MERCH CRACKDOWN – The rapper won a federal court order empowering law enforcement to seize bootleg merchandise sold outside his Charlotte, N.C., concert, regardless of who was selling it. He’s the latest artist to file such a case to protect ever-more-valuable merch revenue following Metallica, SZA, Post Malone and many others.
MF DOOM NOTEBOOK BATTLE – Attorneys for Eothen “Egon” Alapatt fired back at a lawsuit that claims he stole dozens of private notebooks belonging to the late hip-hop legend MF Doom, calling the case “baseless and libelous” and telling his side of the disputed story.
“THE DAMAGE WILL BE DONE” – Universal Music Group asked for a preliminary injunction that would immediately block artificial intelligence company Anthropic PBC from using copyrighted music to train future AI models while their high-profile case plays out in court.
DIDDY TEQUILA CASE – In a separate legal battle involving Diddy, a New York appeals court hit pause on his lawsuit against alcohol giant Diageo that accused the company of racism and failing to adequately support his DeLeon brand of tequila. The court stayed the case while Diageo appeals a key ruling about how the dispute should proceed.