State Champ Radio

by DJ Frosty


More people around the globe are listening to licensed music services than ever before, and are doing so via a growing number of platforms, but piracy continues to divert cash from creators’ pockets. Meanwhile, a majority of music fans think that artificial intelligence (AI) should not be used to clone music artists’ voices without authorization, according to a new consumer survey from international recorded-music trade organization IFPI.
IFPI’s “Engaging with Music 2023” study reveals that music consumers spend an average of 20.7 hours listening to music weekly, up from 20.1 hours in 2022 – the equivalent of an extra 13 three-minute songs per week.

The London-based organization found that 73% of the 43,000-plus music fans it surveyed listen to their favorite artists through subscription or ad-supported audio streaming services such as Spotify, Apple Music or Amazon Music, down slightly from last year’s figure of 74% (IFPI says the small decrease is due to a change in accounting methodology rather than a drop in real terms). The proportion of paying subscribers rose from 46% in 2022 to 48% this year.

Audio subscription services are the most used format, accounting for around a third (32%) of music fans’ weekly listening time, closely followed by video streaming via platforms like YouTube or TikTok, which make up 31% of consumption. 

On average, people now use more than seven different methods to engage with music, reports IFPI, with other popular formats including radio listening (17%), purchased music (9%) and attending live concerts (4%). 
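As a rough sanity check, the weekly hours spent on each format can be derived from the 20.7-hour average and the shares reported above. This is illustrative arithmetic only, based solely on the survey figures quoted in this article:

```python
# Approximate weekly listening hours per format, derived from IFPI's
# reported 20.7-hour weekly average and each format's share of listening time.
TOTAL_WEEKLY_HOURS = 20.7

shares = {
    "audio subscription streaming": 0.32,
    "video streaming": 0.31,
    "radio": 0.17,
    "purchased music": 0.09,
    "live concerts": 0.04,
}

for fmt, share in shares.items():
    # e.g. audio subscription streaming works out to roughly 6.6 hours/week
    print(f"{fmt}: ~{TOTAL_WEEKLY_HOURS * share:.1f} hours/week")
```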

In line with previous years, the adoption of subscription streaming services is highest among younger listeners, with 60% of 16–24-year-olds and 62% of 25-34-year-olds surveyed saying they use subscription music platforms. Usage drops to 28% in the 55-64-year-old age bracket, although consumption is up year-on-year across all age demographics. 

Among 16-24-year-olds, short-form video platforms such as TikTok are the most popular way to engage with music on a daily basis, followed by audio subscription streaming services and then video streaming formats like YouTube.

The top five countries where people spent the most time listening to music through a subscription streaming service were Sweden (61% of people surveyed), Mexico (57%), Germany (55%), the U.S. (53%) and New Zealand (52%), with the United Kingdom dropping out of the top five.

Overall, IFPI reports a 7% year-on-year rise in time spent listening to music on paid streaming services – a slower rate of growth than the 10% rise in listening time in 2022. 

Artificial Intelligence and the Persistent Piracy Problem

For the first time, IFPI’s research team asked music fans for their views on how they think artificial intelligence will impact the industry. Nearly eight in ten (79%) said that human creativity is essential to the creation of music, and 74% of respondents said that AI should not be used to clone or impersonate music artists without authorization.

The vast majority of people surveyed supported the need for AI systems and developers to be transparent and clearly identify any training data they have used to create new music works, which is one of the key provisions of the recently agreed EU AI Act. 

The IFPI report was compiled by surveying internet users aged 16-64 between August and October across 26 countries, including the United States, Japan, United Kingdom, Germany, France, China, Australia, Brazil, Canada, Mexico, Indonesia and Saudi Arabia. 

Collectively, these markets accounted for more than 91% of global recorded music revenues in 2022, according to this year’s IFPI Global Music Report. IFPI says the report is the largest music survey of its kind ever conducted. 

In terms of genres, pop remains the most popular type of music globally, followed by rock, hip-hop/rap, dance/electronic and Latin. On average, music fans said that they listened to more than eight different genres of music with local-language genres such as K-pop in South Korea or Amapiano in South Africa increasingly popular in domestic markets. 

Writing in the study’s foreword, IFPI chief executive Frances Moore says its findings demonstrate how the music industry has evolved to give “artists more opportunities than ever to find audiences,” who are in turn “discovering and engaging with more music in an increasing number of ways.”  

Nevertheless, music piracy remains an ongoing issue that has “a severe and direct impact on royalties,” warns Moore. Of those surveyed, 29% said that they use unlicensed or illegal methods to listen to or obtain music, down slightly from the previous year.

Stream-ripping sites remain the most popular way for consumers to access copyright-infringing music, IFPI found, with 41% of 16-24-year-olds confessing to using them. One in five people (20%) said they had used an unlicensed mobile app to illegally download music.

The listening study also contains separate reports looking at music consumption in China, India, Indonesia, Nigeria, the Philippines, Saudi Arabia, UAE and Vietnam. 

In China, which last year overtook France as the fifth-biggest music market worldwide with revenues of $1.2 billion, 96% of people surveyed said they now used licensed music streaming services with the total number of hours spent listening to music each week increasing to just under 30 hours among respondents. 

Despite the rapid growth in streaming in China, 75% of people surveyed said that they still used unlicensed or illegal ways to access music, demonstrating that piracy remains a serious issue in the world’s most populous country. 

Responding to the report’s findings, Moore said that tackling all forms of copyright infringement on a global basis would continue to be a priority for IFPI to “ensure the most secure digital environment possible for music creators and fans alike.” 

Legislators have provisionally agreed to sweeping new laws that will regulate the use of artificial intelligence (AI) in Europe, including controls around the use of copyrighted music.
The deal between policy makers from the European Union Parliament, Council and European Commission on the EU’s Artificial Intelligence Act was reached late on Friday night in Brussels local time following months of negotiations and amid fierce lobbying from the music and tech industries.   

The draft legislation is the world’s first comprehensive set of laws regulating the use of AI and places a number of legal obligations on technology companies and AI developers, including those working in the creative sector and music business.   

The precise technical details of those measures are still being finalized by EU policy makers, but earlier versions of the bill decreed that companies using generative or foundation AI models like OpenAI’s ChatGPT or Anthropic’s Claude 2 would be required to provide summaries of any copyrighted works, including music, that they use to train their systems. 

The AI Act will also force developers to clearly identify content that is created by AI, as opposed to human works, before it is placed on the market. In addition, tech companies will have to ensure that their systems are designed in such a way that prevents them from generating illegal content.

Large tech companies that break the rules – which govern all applications and uses of AI inside the 27-member bloc of EU countries — will face fines of up to €35 million or 7% of global annual turnover. Start-up businesses or smaller tech operations will receive proportionate financial punishments, said the European Commission.
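Under the penalty structure described, the applicable ceiling for a large company is whichever of the two amounts is higher. A minimal sketch of that calculation (the function name is illustrative, not from the legislation):

```python
def max_ai_act_fine(global_annual_turnover_eur: float) -> float:
    """Upper bound of the fine for large companies under the AI Act:
    up to 35 million euros or 7% of global annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# A company with 1 billion euros in turnover faces up to 70 million euros,
# since 7% of turnover exceeds the 35 million euro floor.
print(max_ai_act_fine(1_000_000_000))
```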

Governance will be carried out by national authorities, while a new European AI Office will be created to supervise the enforcement of the new rules on general purpose AI models. 

President of the European Commission Ursula von der Leyen called the agreement “a historic moment” that “will make a substantial contribution to the development of global rules and principles for human-centric AI.” 

Responding to the announcement, Tobias Holzmüller, CEO of German collecting society GEMA, said the deal reached by the European government was a welcome “step in the right direction” but cautioned that its rules and provisions “need to be sharpened further on a technical level.”  

“The outcome must be a clearly formulated transparency regime that obliges AI providers to submit detailed evidence on the contents they used to train their systems,” said Holzmüller.  

Representatives of the technology industry, which had lobbied to weaken the AI Act’s transparency provisions, criticized the deal and warned that it was likely to put European AI developers at a competitive disadvantage.  

Daniel Friedlaender, senior vice president of the Computer and Communications Industry Association (CCIA), which counts Alphabet, Apple, Amazon and Meta among its members, said in a statement that “crucial details” of the AI Act are still missing, “with potentially disastrous consequences for the European economy.”

“The final AI Act lacks the vision and ambition that European tech startups and businesses are displaying right now,” said CCIA Europe’s Policy Manager, Boniface de Champris. He warned that, if passed, the legislation might “end up chasing away the European champions that the EU so desperately wants to empower.” 

Now that a political agreement has been reached on the AI Act, legislators will spend the coming weeks finalizing the exact technical details of the regulation and translating its terms for the 27 EU member countries.

The final text then needs to be approved by the European Council and Parliament, with a decisive vote not expected to take place until early next year, possibly as late as March. If passed, the act will be applicable two years after its entry into force, except for some specific provisions: bans will apply after six months, while the rules on generative AI models will begin after 12 months.

In a statement, international recorded music trade organization IFPI said the first-of-its-kind legislation provides “a constructive and encouraging framework” for regulation of the nascent technology.   

“AI offers creators both opportunities and risks,” said an IFPI spokesperson, “and we believe there is a path to a mutually successful outcome for both the creative and technology communities.”

The American Society of Composers, Authors and Publishers (ASCAP) argued that AI companies need to license material from copyright owners to train their models and that “a new federal right of publicity… is necessary to address the unprecedented scale on which AI tools facilitate the improper use of a creator’s image, likeness, and voice” in a document filed to the Copyright Office on Wednesday (Dec. 6). 
The Copyright Office announced that it was studying “the copyright issues raised by generative artificial intelligence” in August and solicited written comments from relevant players in the space. Initial written comments had to be submitted by October 30, while reply comments — which give organizations like ASCAP the chance to push back against assertions made by AI companies like Anthropic and OpenAI — were due December 6.

Generative AI models require training: They ingest large amounts of data to identify patterns. “AI training is a computational process of deconstructing existing works for the purpose of modeling mathematically how [they] work,” Google wrote in its reply comments for the Copyright Office. “By taking existing works apart, the algorithm develops a capacity to infer how new ones should be put together” — hence the “generative” part of this. 
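Google’s description — taking existing works apart to infer how new ones should be put together — can be illustrated with a deliberately tiny toy: a character-level bigram model that “trains” by counting which character follows which, then “generates” by sampling from those counts. This is a simplified sketch for intuition only, not how production generative models work:

```python
import random
from collections import defaultdict

def train_bigram(corpus: str) -> dict:
    """'Deconstruct' the corpus by recording which character follows which."""
    model = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        model[a].append(b)
    return model

def generate(model: dict, start: str, length: int, seed: int = 0) -> str:
    """'Put together' a new string by sampling from the learned counts."""
    rng = random.Random(seed)  # seeded for reproducibility
    out = start
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:  # no observed continuation: stop
            break
        out += rng.choice(followers)
    return out

model = train_bigram("la la la, la di da")
print(generate(model, "l", 10))
```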

ASCAP represents songwriters, composers, and music publishers, and its chief concern is that AI companies will be allowed to train models on its members’ works without coming to some sort of licensing arrangement ahead of time. “Numerous comments from AI industry members raise doubts about the technical or economic feasibility of licensing as a model for the authorized use of protected content,” ASCAP writes. “But armchair speculations about the efficiency of licensing do not justify a rampant disregard for creators’ rights.”

ASCAP adds that “numerous large-scale AI tools have already been developed exclusively on the basis of fully licensed or otherwise legally obtained materials” — pointing to Boomy, Stable Audio, Generative AI by Getty Images, and Adobe Firefly — “demonstrating that the development of generative AI technologies need not come at the expense of creators’ rights.”

ASCAP also calls for the implementation of a new federal right-of-publicity law, worried that voice-cloning technology, for example, can threaten artists’ livelihood. “Generative AI technology introduces unprecedented possibilities for the unauthorized use of a creator’s image, likeness, and voice,” ASCAP argues. “The existing patchwork of state laws were not written with this technology in view, and do not adequately protect creators.”

“Without allowing the artists and creators to control their voice and likeness,” ASCAP continues, “this technology will create both consumer confusion and serious financial harm to the original music creators.”

LONDON — Representatives of the creative industries are urging legislators not to water down forthcoming regulations governing the use of artificial intelligence, including laws around the use of copyrighted music, amid fierce lobbying from big tech companies.     
On Wednesday (Dec. 6), policy makers from the European Union Parliament, Council and European Commission will meet in Brussels to negotiate the final text of the EU’s Artificial Intelligence Act – the world’s first comprehensive set of laws regulating the use of AI.  

The current version of the AI Act, which was provisionally approved by Members of European Parliament (MEPs) in a vote in June, contains several measures that will help determine what tech companies can and cannot do with copyright protected music works. Among them is the legal requirement that companies using generative AI models like OpenAI’s ChatGPT or Anthropic’s Claude 2 (classified by the EU as “general purpose AI systems”) provide summaries of any copyrighted works, including music, that they use to train their systems.

The draft legislation will also force developers to clearly identify content that is created by AI, as opposed to human works. In addition, tech companies will have to ensure that their systems are designed in such a way that prevents them from generating illegal content.

While these transparency provisions have been openly welcomed by music executives, behind the scenes technology companies have been actively lobbying policymakers to try and weaken the regulations, arguing that such obligations could put European AI developers at a competitive disadvantage.

“We believe this additional legal complexity is out of place in the AI Act, which is primarily focused on health, safety, and fundamental rights,” said a coalition of tech organizations and trade groups, including the Computer and Communications Industry Association, which counts Alphabet, Apple, Amazon and Meta among its members, in a joint statement dated Nov. 27.

In the statement, the tech representatives said they were concerned “about the direction of the current proposals to regulate” generative AI systems and said the EU’s proposals “do not take into account the complexity of the AI value chain.”   

European lawmakers are also in disagreement over how to govern the nascent technology, with EU member states France, Germany and Italy understood to be in favor of light-touch regulation for developers of generative AI, according to sources close to the negotiations.

In response, music executives are making a final pitch to legislators to ensure that AI companies respect copyright laws and strengthen existing protections against the unlawful use of music in training AI systems.  

Helen Smith, the executive chair of IMPALA. (Photo: Lea Fery)

Helen Smith, executive chair of European independent labels group IMPALA, tells Billboard that the inclusion of “meaningful transparency and record keeping obligations” in the final legislation is a “must for creators and rightsholders” if they are to be able to effectively engage in licensing negotiations.

In a letter sent to EU ambassadors last week, Björn Ulvaeus, founder member of ABBA and president of CISAC, the international trade organization for copyright collecting societies, warned policymakers that “without the right provisions requiring transparency, the rights of the creator to authorise and get paid for use of their works will be undermined and impossible to implement.”

The European Composer and Songwriter Alliance (ECSA), International Federation of Musicians (FIM) and International Artist Organisation (IAO) are also calling for guarantees that the rights of their members are respected.

If legislators fail to reach a compromise agreement at Wednesday’s fifth and planned-to-be-final negotiating session on the AI Act, there are a number of possible outcomes, including further ‘trilogue’ talks the following week. If a deal doesn’t happen this month, however, there is the very real risk that the AI Act won’t be passed before the European parliamentary elections take place in June.

If that happens, a new parliament could theoretically scrap the bill altogether, although executives closely monitoring events in Brussels, the de facto capital of the European Union, say that is unlikely to happen and that there is strong political will from all sides to find a resolution before the end of the year when the current Spain-led presidency of the EU Council ends.

Because the AI Act is a regulation and not a directive — such as the equally divisive and just-as-fiercely-lobbied 2019 EU Copyright Directive — it would pass directly into law in all 27 EU member states, although only once it has been fully approved by the different branches of the European government via a final vote and officially entered into force (the exact timeframe of which could be determined in negotiations, but could take up to three years). 

In that instance, the act’s regulations will apply to any company that operates in the European Union, regardless of where they are based. Just as significant, if passed, the act will provide a world-first legislative model to other governments and international jurisdictions looking to draft their own laws on the use of artificial intelligence.

“It is important to get this right,” says IMPALA’s Smith, “and seize the opportunity to set a proper framework around these [generative AI] models.”

In April, Grimes encouraged artists to make music using her voice — as replicated by artificial intelligence-powered technology. Even as she embraced a high-tech future, however, she noted that there were some old-fashioned legal limitations. “I don’t own the rights to the vocals from my old albums,” she wrote on X. “If you make remixes, they may get taken down.”

Artificial intelligence has dominated the hype cycle in 2023. But most signed artists who are enthusiastic about testing out this technology will have to move cautiously, wary of the fact that preexisting contracts may assert some level of control over how they can use their voice. “In general, in a major label deal, they’re the exclusive label for name, likeness and voice under the term,” says one veteran manager who spoke on the condition of anonymity. “Labels might be mad if artists went around them and did a deal themselves. They might go, ‘Hey, wait a minute, we have the rights to this.’”

On the flip side, labels probably can’t (or won’t) move unilaterally either. “In our agreements, in a handful of territories, we’ve been getting exclusive name, image, likeness and voice rights in connection with recordings for years,” says one major label source. That said, “as a practical matter, we wouldn’t license an artist’s voice for a voice model or for any project without the artists being on board with it. It would be bad business for us.”

For the moment, both sides are inching forward, trying to figure out how to “interpret new technology with arcane laws,” as Arron Saxe, who manages several artists’ estates, puts it. “It’s an odd time because the government hasn’t stepped in and put down real guidelines around AI,” adds Dan Smith, general manager of the dance label Armada Music. 

That means guidelines must be drawn via pre-existing contracts, most of which were not written with AI in mind, and often vary from one artist to the next. Take a recent artist deal sent out by one major label and reviewed by Billboard: Under the terms, the label has the “exclusive right to record Artist Performances” with “performance” broadly defined to include “singing, speaking… or such performance itself, as the context requires.” The word “recording” is similarly roomy: “any recording of sound…by any method and on any substance or material, whether now or hereafter known.” 

Someone in this deal probably couldn’t easily go rogue and build a voice-cloning model on newly recorded material without permission. Even to participate in YouTube’s recently announced AI voice generation experiment, some artists needed to get permission in the form of a “label waiver,” according to Audrey Benoualid, a partner at Myman Greenspan Fox Rosenberg Mobasser Younger & Light. (In an interview about YouTube’s new feature, Demis Hassabis, CEO of Google DeepMind, said only that it has “been complicated” to negotiate deals with various music rights holders.) Even after an artist’s deal ends, if their recordings remain with a label, they would have to be careful to only train voice-cloning tech with material that isn’t owned exclusively by their former record company.

It’s not just artists that are interested in AI opportunities, though. Record labels stand to gain from developing licensing deals with AI companies for their entire catalogs, which could in turn bring greater opportunities for artists who want to participate. At the Made on YouTube event in September, Warner Music Group CEO Robert Kyncl said it’s the label’s “job” to make sure that artists who lean into AI “benefit.” At the same time, he added, “It’s also our job together to make sure that artists who don’t want to lean in are protected.” 

In terms of protections, major label deals typically come with a list of approval rights: Artists will ask that they get the chance to sign off on any sample of their recordings or the use of one of their tracks in a movie trailer. “We believe that any AI function is just another use of the talents’ intellectual property that would take some approval by the creator,” explains Leron Rogers, a partner at Fox Rothschild.

In many states, artists also have protection under the “right of publicity,” which says that people have control over the way others can exploit their individual identities. “Under that umbrella is where things like the right to your voice, your face, your likeness are protected and can’t be mimicked because it’s unfair competition,” says Lulu Pantin, founder of Loop Legal. “But because those laws are not federal, they’re inconsistent, and every state’s laws are slightly different” — not all states specifically call out voices, for example —  “[so] there’s concern that that’s not going to provide robust protection given how ubiquitous AI has become already.” (A lack of federal law also limits the government’s ability to push for enforcement abroad.) 

To that end, a bipartisan group of senators recently introduced a draft proposal of the NO FAKES act (“Nurture Originals, Foster Art, and Keep Entertainment Safe”), which would enshrine a federal right for artists, actors and others to take legal action against anyone who creates unauthorized “digital replicas” of their image, voice, or likeness. “Artists would now gain leverage they didn’t have before,” says Mike Pelczynski, who serves on the advisory board of the company voice-swap.ai. 

While the entertainment industry tracks NO FAKES’ progress, Smith from Armada believes “we will probably start to see more artist agreements that are addressing the use of your voice.” Sure enough, Benoualid says that in new label deals for her clients, she now asks for approval over any use of an artist’s name, likeness, or voice in connection with AI technology. “Express written approval should be required prior to a company reproducing vocals, recordings, or compositions for the purpose of training AI platforms,” agrees Matthew Gorman, a lawyer at Cox & Palmer. 

Pantin has been keeping an eye on the way other creative fields are handling this fast-evolving tech to see if there are lessons that can be imported into music. “One thing that I’ve been trying to do and I’ve had success in some instances with is asking the rights holders — the publishers, the labels — for consent rights from the individual artists or songwriter before their work is used to train generative AI,” she says. “On the book publishing side, the Authors Guild has put forth language they recommended are included in all publishing agreements, and so I’m drawing from that and extending that to songwriting.”

All these discussions are new, and the long-term impact of AI-driven technology on the creative fields remains unclear. Daouda Leonard, who manages Grimes, is adamant that in the music industry’s near future, “the licensing of voice is going to become a valuable asset.” Others are less sure — “nobody really knows how important this will be,” the major label source says.

Perhaps Grimes put it best on X: “We expect a certain amount of chaos.”

Dennis Kooker, president of global digital business at Sony Music Entertainment, represented the music business at Sen. Chuck Schumer’s (D-NY) seventh artificial intelligence insight forum in Washington, D.C. on Wednesday (Nov. 29). In his statement, Kooker implored the government to act on new legislation to protect copyright holders to ensure the development of “responsible and ethical generative AI.”

The executive revealed that Sony has already sent “close to 10,000 takedowns to a variety of platforms hosting unauthorized deepfakes that SME artists asked us to take down.” He says these platforms, including streamers and social media sites, are “quick to point to the loopholes in the law as an excuse to drag their feet or to not take the deepfakes down when requested.”

Presently, there is no federal law that explicitly requires platforms to take down songs that impersonate an artist’s voice. Platforms are only obligated to do this when a copyright (a sound recording or a musical work) is infringed, as stipulated by the Digital Millennium Copyright Act (DMCA). Interest in using AI to clone the voices of famous artists has grown rapidly since a song with AI impersonations of Drake and The Weeknd went viral earlier this year. The track, called “Heart on My Sleeve,” has become one of the most popular use cases of music-related AI.

A celebrity’s voice and likeness can be protected by “right of publicity” laws that safeguard it from unauthorized exploitation, but this right is limited. Its protections vary state-to-state and are even more limited post-mortem. In May, Billboard reported that the major labels — Sony, Universal Music Group and Warner Music Group — had been in talks with Spotify, Apple Music and Amazon Music to create a voluntary system for takedowns of right of publicity violations, much like the one laid out by the DMCA, according to sources at all three majors. It is unclear from Kooker’s remarks if the platforms that are dragging their feet on voice clone removals include the three streaming services that previously took part in these discussions.

In his statement, Kooker asked the Senate forum to create a federal right of publicity to create a stronger and more uniform protection for artists. “Creators and consumers need a clear unified right that sets a floor across all fifty states,” he said. This echoes what UMG general counsel/executive vp of business and legal affairs Jeffrey Harleston asked the Senate during a July AI hearing.

Kooker expressed his “sincere gratitude” to Sens. Chris Coons, Marsha Blackburn, Amy Klobuchar and Thom Tillis for releasing a draft bill called the No FAKES (“Nurture Originals, Foster Art, and Keep Entertainment Safe”) Act in October, which would create a federal property right for one’s voice or likeness and protect against unauthorized AI impersonations. At its announcement, the No FAKES Act drew resounding praise from music business organizations, including the RIAA and the American Association of Independent Music.

Kooker also stated that in this early stage many available generative AI products today are “not expanding the business model or enhancing human creativity.” He pointed to a “deluge of 100,000 new recordings delivered to [digital service providers] every day” and said that some of these songs are “generated using generative AI content creation tools.” He added, “These works flood the current music ecosystem and compete directly with human artists…. They reduce and diminish the earnings of human artists.”

“We have every reason to believe that various elements of AI will become routine in the creative process… [as well as] other aspects of our business,” like marketing and royalty accounting, Kooker continued. He said Sony Music has already started “active conversations” with “roughly 200” different AI companies about potential partnerships.

Still, he stressed five key issues remain that need to be addressed to “assure a thriving marketplace for AI and music.” Read his five points, as written in his prepared statement, below:

Assure Consent, Compensation, and Credit. New products and businesses built with music must be developed with the consent of the owner and appropriate compensation and credit. It is essential to understand why the training of AI models is being done, what products will be developed as a result, and what the business model is that will monetize the use of the artist’s work. Congress and the agencies should assure that creators’ rights are recognized and respected.

Confirm That Copying Music to Train AI Models is Not Fair Use. Even worse are those that argue that copyrighted content should automatically be considered fair use so that protected works are never compensated for usage and creators have no say in the products or business models that are developed around them and their work. Congress should assure and agencies should presume that reproducing music to train AI models, in itself, is not a fair use.

Prevent the Cloning of Artists’ Voices and Likenesses Without Express Permission. We cannot allow an artist’s voice or likeness to be cloned for use without the express permission of the artist. This is a very personal decision for the artist. Congress should pass into law effective federal protections for name, image, and likeness.

Incentivize Accurate Record-Keeping. Correct attribution will be a critical element to artists being paid fairly and correctly for new works that are created. In addition, rights can only be enforced around the training of AI when there are accurate records about what is being copied. Otherwise, the inability to enforce rights in the AI marketplace equates to a lack of rights at all, producing a dangerous imbalance that prevents a thriving ecosystem. This requires strong and accurate record keeping by the generative AI platforms, a requirement that urgently needs legislative support to ensure incentives are in place so that it happens consistently and correctly.

Assure Transparency for Consumers and Artists. Transparency is necessary to clearly distinguish human-created works from AI-created works. The public should know, when they are listening to music, whether that music was created by a human being or a machine.

Most conversations around AI in music focus on music creation, protecting artists and rightsholders, and differentiating human-made music from machine-made works. But the conversation shouldn't end there: AI has some hidden superpowers waiting to be explored. One use of the technology with immense potential to positively impact artists is music marketing.

As generative and complementary AI becomes a larger part of creative work in music, marketing will play a bigger role than ever before. Music marketing isn't just about reaching new and existing fans and promoting upcoming singles. Today, music marketing must establish an artist's ownership of their work and ensure that the human creatives involved are known, recognized, and appreciated. We're about to see the golden age of automation for artists who want to make these connections and gain this appreciation.

While marketing is a prerequisite to a creator's success, it demands a great deal of time, energy, and resources. According to Linktree's 2023 Creator Report, 48% of creators who make $100,000-$500,000 per year spend more than 10 hours on content creation every week. On top of that, three out of four creators want to diversify what they create but feel pressure to keep making what the algorithm rewards. Rather than fighting the impossible battle of constantly evolving and cranking out more content to match whatever the algorithm is boosting this week, creatives can have a much greater impact by focusing on their brand and making high-quality content for their audience.

For indie artists without support from labels and dedicated promotion teams, the constant pressure to push their new single on TikTok, post on Instagram, and engage with fans while finding the time to make new music is overwhelming. The pressure is only building, thanks to changes in streaming payouts. Indie artists need to reach escape velocity faster.


AI-powered music marketing can lighten that lift, generating campaign templates and delivering the data artists need to reach their intended audiences. AI can take the data that artists and creators generate and put it to work in a meaningful way, automatically extracting insights from that information and analytics to build marketing campaigns and map out tactics that get results.

AI-driven campaigns can give creators back the time they need to do what they do best: create. While artificial intelligence saves artists time and generates actionable solutions for music promotion, it is still highly dependent on the artist's input and human touch. Just as a flight captain has to set route information and parameters before switching on autopilot, an artist enters their content, ideas, intended audience, and the desired outcome of the marketing campaign. Then, using this information, the AI-powered marketing platform can provide all of the data and suggestions necessary to produce the targeted results.

Rather than taking over the creative process, AI should be used to assist and empower artists to be more creative. It can help put the joy back into what can be a truly fun process — finding, reaching, and engaging with fans. 

A large portion of artists who have tapped into AI marketing have never spent money on marketing before, but with the help of these emerging tools, planning and executing effective campaigns is more approachable and intuitive. As the music industry learns more about artificial intelligence and debates its ethical implications in music creation, equal thought must be given to the opportunities that it unlocks for artists to grow their fanbases, fuel more sustainable careers, and promote their human-made work.

Megh Vakharia is the co-founder and CEO of SymphonyOS, the AI-powered marketing platform empowering creatives to build successful marketing campaigns that generate fan growth using its suite of smart, automated marketing tools.

Earlier this month, 760 stations owned by iHeartMedia simultaneously threw their weight behind a new single: The Beatles’ “Now and Then.” This was surprising, because the group broke up in 1970 and two of the members are dead. “Now and Then” began decades ago as a home recording by John Lennon; more recently, AI-powered audio technology allowed for the separation of the demo’s audio components — isolating the voice and the piano — which in turn enabled the living Beatles to construct a whole track around them and roll it out to great fanfare. 

“For three days, if you were a follower of popular culture, all you heard about was The Beatles,” says Arron Saxe, who represents several estates, including Otis Redding’s and Bill Withers’s. “And that’s great for the business of the estate of John Lennon and the estate of George Harrison and the current status of the two living legends.”

For many people, 2023 has been the year that artificial intelligence technology left the realm of science fiction and crashed rudely into daily life. And while AI-powered tools have the potential to impact wide swathes of the music industry, they are especially intriguing for those who manage estates or the catalogs of dead artists. 

That’s because there are inherent constraints involved with this work: No one is around to make new stuff. But as AI models get better, they have the capacity to knit old materials together into something that can credibly pass as new — a reproduction of a star’s voice, for example. “As AI develops, it may impact the value of an estate, depending on what assets are already in the estate and can be worked with,” says Natalia Nataskin, chief content officer for Primary Wave, who estimates that she and her team probably spend around 25% of their time per week mulling AI (time she says they used to spend contemplating possibilities for NFTs).

And a crucial part of an estate manager’s job, Saxe notes, is “looking for opportunities to earn revenue.” “Especially with my clients who aren’t here,” he adds, “you’re trying to figure out, how do you keep it going forward?”

The answer, according to half a dozen executives who work with estates or catalogs of dead artists or songwriters, is “very carefully.” “We say no to 99 percent of opportunities,” Saxe says. 

“You have this legacy that is very valuable, and once you start screwing with it, you open yourself up to causing some real damage,” adds Jeff Jampol, who handles the estates of The Doors, Janis Joplin and more. “Every time you’re going to do something, you have to be really protective. It’s hard to be on the bleeding edge.”

To work through these complicated issues, WME went so far as to establish an AI Task Force where agents from every division educate themselves on different platforms and tools to "get a sense for what is out there and where there are advantages to bring to our clients," says Chris Jacquemin, the company's head of digital strategy. The task force also works with WME's legal department to gain "some clarity around the types of protections we need to be thinking about," he continues, as well as with the agency's legislative division in Washington, D.C.

At the moment, Jampol sees two potentially intriguing uses of AI in his work. “It would be very interesting to have, for instance, Jim Morrison narrate his own documentary,” he explains. He could also imagine using an AI voice model to read Morrison’s unrecorded poetry. (The Doors singer did record some poems during his lifetime, suggesting he was comfortable with this activity.) 

On Nov. 15, Warner Music Group announced a potentially similar initiative, partnering with the estate of French great Edith Piaf to create a voice model — based on the singer's old interviews — that will narrate the animated film Edith. The executors of Piaf's estate, Catherine Glavas and Christie Laume, said in a statement that "it's been a special and touching experience to be able to hear Edith's voice once again — the technology has made it feel like we were back in the room with her."

The use of AI tech to recreate a star’s speaking voice is “easier” than attempting to put together an AI model that will replicate a star singing, according to Nataskin. “We can train a model on only the assets that we own — on the speaking voice from film clips, for example,” she explains. 

In contrast, to train an AI model to sing like a star of old, the model needs to ingest a number of the artist’s recordings. That requires the consent of other rights holders — the owners of those recordings, which may or may not be the estate, as well as anyone involved in their composition. Many who spoke to Billboard for this story said they were leery of AI making new songs in the name of bygone legends. “To take a new creation and say that it came from someone who isn’t around to approve it, that seems to me like quite a stretch,” says Mary Megan Peer, CEO of the publisher peermusic. 

Outside the United States, however, the appetite for this kind of experimentation may differ. Roughly a year ago, the Chinese company Tencent Music Entertainment told analysts that it had used AI-powered technology to create new vocal tracks in the voices of dead singers, one of which went on to earn more than 100 million streams.

For now, at least, Nataskin characterized Primary Wave as focused on “enhancing” with AI tech, “rather than creating something from scratch.” And after Paul McCartney initially mentioned that artificial intelligence played a role in “Now and Then,” he quickly clarified on X that “nothing has been artificially or synthetically created,” suggesting there is still some stigma around the use of AI to generate new vocals from dead icons. The tech just “cleaned up some existing recordings,” McCartney noted.

This kind of AI use for "enhancing" and "cleaning up," tweaking and adjusting has already been happening regularly for several years. "For all of the industry freakout about AI, there's actually all these ways that it's already operating every day on behalf of artists or labels that isn't controversial," says Jessica Powell, co-founder and CEO of Audioshake, a company that uses AI-powered technology for stem separation. "It can be pretty transformational to be able to open up back catalog for new uses."

The publishing company peermusic used AI-powered stem separation to create instrumentals for two tracks in its catalog — Gaby Moreno's "Fronteras" and Rafael Solano's "Por Amor" — which could then be placed in ads for Oreo and Don Julio, respectively. Much as with the Beatles' track, Łukasz Wojciechowski, co-founder of Astigmatic Records, used stem separation to isolate, and then remove distortion from, the trumpet part in a previously unreleased recording he found of jazz musician Tomasz Stanko. After the cleanup, the music could be released for the first time. "I'm seeing a lot of instances with older music where the quality is really poor, and you can restore it," Wojciechowski says.

Powell acknowledges that these uses are “not a wild proposition like, ‘create a new voice for artist X!’” Those have been few and far between — at least the authorized ones. (Hip-hop fans have been using AI-powered technology to turn snippets of rap leaks from artists like Juice WRLD, who died in 2019, into “finished” songs.) For now, Saxe believes “there hasn’t been that thing where people can look at it and go, ‘They nailed that use of it.’ We haven’t had that breakout commercial popular culture moment.”

It’s still early, though. “Where we go with things like Peter Tosh or Waylon Jennings or Eartha Kitt, we haven’t decided yet,” says Phil Sandhaus, head of WME’s Legends division. “Do we want to use voice cloning technologies out there to create new works and have Eartha Kitt in her unique voice sing a brand new song she’s never sung before? Who knows? Every family, every estate is different.”

Additional reporting by Melinda Newman

Listeners remain wary of artificial intelligence, according to Engaging with Music 2023, a forthcoming report from the International Federation of the Phonographic Industry (IFPI) that seems aimed in particular at government regulators.
The IFPI surveyed 43,000 people across 26 countries, coming to the conclusion that 76% of respondents “feel that an artist’s music or vocals should not be used or ingested by AI without permission,” and 74% believe “AI should not be used to clone or impersonate artists without authorisation.” 

The results are not surprising. Most listeners probably weren’t thinking much, if at all, about AI and its potential impacts on music before 2023. (Some still aren’t thinking about it: 89% of those surveyed said they were “aware of AI,” leaving 11% who have somehow managed to avoid a massive amount of press coverage this year.) New technologies are often treated with caution outside the tech industry. 

It’s also easy for survey respondents to support statements about getting authorization for something before doing it — that generally seems like the right thing to do. But historically, artists haven’t always been interested in preemptively obtaining permission. 

Take the act of sampling another song to create a new composition. Many listeners would presumably agree that artists should go through the process of clearing a sample before using it. In reality, however, many artists sample first and clear later, sometimes only if they are forced to.

In a statement, Frances Moore, IFPI’s CEO, said that the organization’s survey serves as a “timely reminder for policymakers as they consider how to implement standards for responsible and safe AI.”

U.S. policymakers have been moving slowly to develop potential guidelines around AI. In October, a bipartisan group of senators released a draft of the NO FAKES Act, which aims to prevent the creation of “digital replicas” of an artist’s image, voice, or visual likeness without permission.

“Generative AI has opened doors to exciting new artistic possibilities, but it also presents unique challenges that make it easier than ever to use someone’s voice, image, or likeness without their consent,” Senator Chris Coons said in a statement. “Creators around the nation are calling on Congress to lay out clear policies regulating the use and impact of generative AI.”

When Dierks Bentley’s band is looking for something to keep it occupied during long bus rides, the group has, at times, turned to artificial intelligence apps, asking them to create album reviews or cover art for the group’s alter ego, The Hot Country Knights.
“So far,” guitarist Charlie Worsham says, “AI does not completely understand The Hot Country Knights.”

By the same token, Music Row doesn’t completely understand AI, but the developing technology is here, inspiring tech heads and early adopters to experiment with it — using it, for example, to get a feel for how Bentley’s voice might fit a new song or to kick-start a verse that has the writer stumped. But it has also inspired a palpable amount of fear among artists anticipating that their voices will be misused and among musicians who feel they’ll be completely replaced.

“As a songwriter, I see the benefit that you don’t have to shell out a ton of money for a demo singer,” one attendee said during the Q&A section of an ASCAP panel about AI on Nov. 7. “But also, as a demo singer, I’m like, ‘Oh, shit, I’m out of a job.’”

That particular panel, moderated by songwriter-producer Chris DeStefano (“At the End of a Bar,” “That’s My Kind of Night”), was one of three AI presentations that ASCAP hosted at Nashville’s Twelve Thirty Club the morning after the ASCAP Country Music Awards, hoping to educate Music City about the burgeoning technology. The event addressed the creative possibilities ahead, the evolving legal discussion around AI and the ethical questions that it raises. (ASCAP has endorsed six principles for AI frameworks.)

The best-known examples of AI’s entry into music have revolved around the use of public figures’ voices in novel ways. Hip-hop artist Drake, in one prominent instance, had his voice re-created in a cover of “Bubbly,” originated by Colbie Caillat, who released her first country album, Along the Way, on Sept. 22. 

“Definitely bizarre,” Caillat said during CMA Week activities. “I don’t think it’s good. I think it makes it all too easy.”

But ASCAP panelists outlined numerous ways AI can be employed for positive uses without misappropriating someone’s voice. DeStefano uses AI-powered iZotope software, which has learned his mixing tendencies, to elevate his tracks to “another level.” Independent hip-hop artist Curtiss King has used AI to handle tasks outside of his wheelhouse that he can’t afford to outsource, such as graphic design or developing video ideas for social media. Singer-songwriter Anna Vaus instructed AI to create a 30-day social media campaign for her song “Halloween on Christmas Eve” and has used it to adjust her bio or press releases — “stuff,” she says, “that is not what sets my soul on fire.” It allows her more time, she said, for “sitting in my room and sharing my human experiences.”

All of this forward motion is happening faster in some other genres than it is in country, and the abuses — the unauthorized use of Drake’s voice or Tom Cruise’s image — have entertainment lawyers and the Copyright Office playing catch-up. Those examples test the application of the fair use doctrine in copyright law, which allows creators to play with existing copyrights. But as Sheppard Mullin partner Dan Schnapp pointed out during the ASCAP legal panel, fair use requires the new piece to be a transformative product that does not damage the market for the original work. When Drake’s voice is being applied without his consent to a song he has never recorded and he is not receiving a royalty, that arguably affects his marketability.

The Copyright Office has declined to offer copyright protection for AI creations, though works that are formed through a combination of human and artificial efforts complicate the rule. U.S. Copyright Office deputy general counsel Emily Chapuis pointed to a comic book composed by a human author who engaged AI for the drawings. Copyright was granted to the text, but not the illustrations.

The legal community is also sorting through rights to privacy and so-called “moral rights” — the originator’s ability to control how a work is used.

“You can’t wait for the law to catch up to the tech,” Schnapp said during the legal panel. “It never has and never will. And now, this is the most disruptive technology that’s hit the creative industry, generally, in our lifetime. And it’s growing exponentially.”

Which has some creators uneasy. Carolyn Dawn Johnson asked from the audience if composers should stop using their phones during writing appointments because ads can track typed and spoken activity, thus opening the possibility that AI begins to draw on content that has never been included in copyrighted material. The question was not fully answered.

But elsewhere, Nashville musicians are beginning to use AI in multiple ways. Restless Road has had AI apply harmonies to songwriter demos to see if a song might fit its sound. Elvie Shane, toying with a chatbot, developed an idea that he turned into a song about the meth epidemic, “Appalachian Alchemy.” Chase Matthew’s producer put a version of his voice on a song to convince him to record it. Better Than Ezra’s Kevin Griffin, who co-wrote Sugarland’s “Stuck Like Glue,” has asked AI to suggest second verses on songs he was writing — the verses are usually pedestrian, but he has found “one nugget” that helped finish a piece. 

The skeptics have legitimate points, but skeptics also protested electronic instruments, drum machines, CDs, file sharing and programmed tracks. The industry has inevitably adapted to those technologies. And while AI is scary, early adopters seem to think it’s making them more productive and more creative.

“It’s always one step behind,” noted King. “It can make predictions based upon the habits that I’ve had, but there’s so many interactions that I have because I’m a creative and I get creative about where I’m going next … If anything, AI has given me like a kick in the butt to be more creative than I’ve ever been before.”

Songwriter Kevin Kadish (“Whiskey Glasses,” “Soul”) put the negatives of AI into a bigger-picture perspective.

“I’m more worried about it for like people’s safety and all the scams that happen on the phone,” he said on the ASCAP red carpet. “Music is the least of our worries with AI.”
