Members of the American Federation of Musicians voted to ratify the union’s agreement with the Alliance of Motion Picture and Television Producers. The agreement, which covers basic theatrical motion picture and basic television motion picture contracts, gives musicians streaming residuals for the first time, as well as protections against artificial intelligence, according to AFM. In addition to […]
Billie Eilish, Pearl Jam, Nicki Minaj, Katy Perry, Elvis Costello, Darius Rucker, Jason Isbell, Luis Fonsi, Miranda Lambert and the estates of Bob Marley and Frank Sinatra are among more than 200 signees to an open letter targeting tech companies, digital service providers and AI developers over irresponsible artificial intelligence practices, calling such work an “assault on human creativity” that “must be stopped.”
The letter, issued by the non-profit Artist Rights Alliance, calls on such organizations to “cease the use of artificial intelligence (AI) to infringe upon and devalue the rights of human artists,” stressing that any use of AI be done responsibly. “Make no mistake: we believe that, when used responsibly, AI has enormous potential to advance human creativity and in a manner that enables the development and growth of new and exciting experiences for music fans everywhere. Unfortunately, some platforms and developers are employing AI to sabotage creativity and undermine artists, songwriters, musicians and rights holders.”
Artists, songwriters and producers from all genres, several generations and multiple continents added their names to the letter, from younger artists like Ayra Starr to legends like Smokey Robinson and organizations like HYBE. In particular, the signatories point to the use of AI models trained on unlicensed music, which they call “efforts directly aimed at replacing the work of human artists with massive quantities of AI-created ‘sounds’ and ‘images’ that substantially dilute the royalty pools that are paid out to artists. For many working musicians, artists and songwriters who are just trying to make ends meet, this would be catastrophic.”
“Working musicians are already struggling to make ends meet in the streaming world, and now they have the added burden of trying to compete with a deluge of AI-generated noise,” Jen Jacobsen, executive director of the Artist Rights Alliance, said in a statement accompanying the letter. “The unethical use of generative AI to replace human artists will devalue the entire music ecosystem — for artists and fans alike.”
Over the past year or so, many in the music industry have echoed similar calls for the ethical and responsible use of artificial intelligence, which, left unchecked, has the potential to undermine copyright law and make issues like streaming fraud, soundalikes and intellectual property theft far more rampant, far more quickly. There have been Congressional hearings on the matter, and states like Tennessee have begun introducing and passing legislation to protect creators and intellectual property owners from deception and fraud, broadening laws and addressing ethical use. Universal Music Group has developed a task force to address the issue and has cited TikTok’s AI approach as one of the reasons for its ongoing standoff with the platform, while the RIAA, Warner Music Group and others have all weighed in, stressing that protecting IP from unlicensed AI overreach is of utmost importance.
“We must protect against the predatory use of AI to steal professional artists’ voices and likenesses, violate creators’ rights, and destroy the music ecosystem,” the letter concludes. “We call on all digital music platforms and music-based services to pledge that they will not develop or deploy AI music-generation technology, content, or tools that undermine or replace the human artistry of songwriters and artists or deny us fair compensation for our work.”
Read the full letter and see the list of signatories here.
Venice announced the beta launch of a new tool called Co-Manager on Tuesday (April 2). The “career assistant” for artists incorporates “insights from top artist managers, marketers, streaming analysts, and digital strategists with OpenAI machine learning and your unique streaming data,” according to a release.
“Co-Manager is designed to educate artists on the business and marketing of music, so artists can spend more time focused on their creative vision,” Suzy Ryoo, co-founder and president of Venice Music, said in a statement. Venice, co-founded by Troy Carter, believes its tool can help artists plan advertising campaigns and album roll-outs.
Many of the most consequential questions related to the rapid advancement of artificial intelligence — whether genAI models need to license training data, for example — have yet to be decided.
“Unfortunately, other than right of publicity laws that vary in effectiveness on a state-by-state basis, there is little current protection for an artist regarding the threats posed by artificial intelligence, and, therefore, governmental action is urgently needed,” Russell L. King, director of the King Law Firm, told Billboard earlier this year.
But the government isn’t known for moving quickly. That means, “whatever we think about the state of AI and its legal treatment, it’s important to stay nimble and try to think several steps out because things may change fast,” Spotify general counsel Eve Konstan said recently.
To that end, the heads of the major labels have all discussed the importance of finding AI-powered tools to help their artists.
“We are at the gateway of a new technological era with AI,” Sony Music CEO Rob Stringer said in 2023. “And unsurprisingly, music will be a core component of this process. AI promises to provide us tools so that our artists and writers can create and innovate. It also heralds greater levels of insight through machine learning, as well as potential new licensing channels and avenues for commercial exploitation.”
Similarly, Universal Music Group CEO Lucian Grainge talked about the company goal of “forg[ing] groundbreaking private-sector partnerships with AI technology companies” in a memo to staff in January.
“In addition, our artists have begun working with some of the latest AI technology to develop tools that will enhance and support the creative process and produce music experiences unlike anything that’s been heard before,” Grainge continued. “And to leverage AI technology that would benefit artists, we continue to strike groundbreaking agreements with, among others, Endel and BandLab.”
As the entertainment attorney Tamara Milagros-Butler put it recently, “don’t be afraid to explore AI as a tool, but maintain human connection.”
In November, I quit my job in generative AI to campaign for creators’ right not to have their work used for AI training without permission. I started Fairly Trained, a non-profit that certifies generative AI companies that obtain a license before training models on copyrighted works.
Mostly, I’ve felt good about this decision — but there have been a few times when I’ve questioned it. Like when a big media company, though keen to defend its own rights, told me it couldn’t find a way to stop using unfairly trained generative AI in other domains. Or whenever demos from the latest models receive unquestioning praise despite how they’re trained. Or, last week, with the publication of a series of articles about AI music company Suno that I think downplay serious questions about the training data it uses.
Suno is an AI music generation company with impressive text-to-song capabilities. I have nothing against Suno, with one exception: Piecing together various clues, it seems likely that its model is trained on copyrighted work without rights holders’ consent.
What are these clues? Suno refuses to reveal its training data sources. In an interview with Rolling Stone, one of its investors disclosed that Suno didn’t have deals with the labels “when the company got started” (there is no indication this has changed), that they invested in the company “with the full knowledge that music labels and publishers could sue,” and that the founders’ lack of open hostility to the music industry “doesn’t mean we’re not going to get sued.” And, though I’ve approached the company through two channels about getting certified as Fairly Trained, they’ve so far not taken me up on the offer, in contrast to the 12 other AI music companies we’ve certified for training their platforms fairly.
There is, of course, a chance that Suno licenses its training data, and I genuinely hope I’m wrong. If they correct the record, I’ll be the first to loudly and regularly trumpet the company’s fair training credentials.
But I’d like to see media coverage of companies like Suno give more weight to the question of what training data is being used. This is an existential issue for creators.
Editor’s note: Suno’s founders did not respond to requests for comment from Billboard about their training practices. Sources confirm that the company does not have licensing agreements in place with some of the most prominent music rightsholders, including the three major label groups and the National Music Publishers’ Association.
Limiting discussion of Suno’s training data to the fact that it “decline[s] to reveal details,” without explicitly raising the possibility that Suno uses copyrighted music without permission, means readers may not be aware of the potential for unfair exploitation of musicians’ work by AI music companies. This should factor into our thoughts about which AI music companies to support.
If Suno is training on copyrighted music without permission, this is likely the technological factor that sets it apart from other AI music products. The Rolling Stone article mentions some of the tough technical problems that Suno is solving — having to do with tokens, the sampling rate of audio and more — but these are problems that other companies have solved. In fact, several competitors have models as capable as Suno’s. The reason you don’t see more models like Suno’s being released to the public is that most AI music companies want to ensure training data is licensed before they release their products.
The context here is important. Some of the biggest generative AI companies in the world are using untold numbers of creators’ work without permission in order to train AI models that compete with those creators. There is, understandably, a big public outcry at this large-scale scraping of copyrighted work from the creative community. This has led to a number of lawsuits, which Rolling Stone mentions.
The fact that generative AI competes with human creators is something AI companies prefer not to talk about. But it’s undeniable. People are already listening to music from companies like Suno in place of Spotify, and generative AI listening will inevitably eat into music industry revenues — and therefore human musicians’ income — if training data isn’t licensed.
Generative AI is a powerful technology that will likely bring a number of benefits. But if we support the exploitation of people’s work for training without permission, we implicitly support the unfair destruction of the creative industries. We must instead support companies that take a fairer approach to training data.
And those companies do exist. There are a number — generally startups — taking a fairer approach, refusing to use copyrighted work without consent. They are licensing, or using public domain data, or commissioning data, or all of the above. In short, they are working hard not to train unethically. At Fairly Trained, we have certified 12 of these companies in AI music. If you want to use AI music and you care about creators’ rights, you have options.
There is a chance Suno has licensed its data. I encourage the company to disclose what it’s training its AI model on. Until we know more, I hope anyone looking to use AI music will opt instead to work with companies that we know take a fair approach to using creators’ work.
To put it simply — and to use some details pulled from Suno’s Rolling Stone interview — it doesn’t matter whether you’re a team of musicians, what you profess to think about IP, or how many pictures of famous composers you have on the walls. If you train on copyrighted work without a license, you’re not on the side of musicians. You’re unfairly exploiting their work to build something that competes with them. You’re taking from them to your gain — and their cost.
Ed Newton-Rex is the CEO of Fairly Trained and a composer. He previously founded Jukedeck, one of the first AI music companies, ran product in Europe for TikTok, and was vp of audio at Stability AI.
Tennessee Gov. Bill Lee signed the ELVIS Act into law Thursday (Mar. 21), legislation designed to further protect the state’s artists from artificial intelligence deepfakes. The bill, more formally named the Ensuring Likeness Voice and Image Security Act of 2024, replaces the state’s old right of publicity law, which only included explicit protections for one’s “name, photograph, or likeness,” expanding protections to include voice- and AI-specific concerns for the first time.
Gov. Lee signed the bill into law from a local honky-tonk, surrounded by superstar supporters like Luke Bryan and Chris Janson. Lee joked that it was “the coolest bill signing ever.”
The ELVIS Act was introduced by Gov. Lee in January along with State Senate Majority Leader Jack Johnson (R-27) and House Majority Leader William Lambert (R-44), and it has since garnered strong support from the state’s artistic class. Talents like Lindsay Ell, Michael W. Smith, Natalie Grant, Matt Maher and Evanescence’s David Hodges have been vocal in their support for the bill.
It also gained support from the recorded music industry and the Human Artistry Campaign, a global initiative of entertainment organizations that pushes for a responsible approach to AI. The initiative has buy-in from more than 180 organizations worldwide, including the RIAA, NMPA, BMI, ASCAP, Recording Academy and American Association of Independent Music (A2IM).
Right of publicity protections vary from state to state in the United States, creating a patchwork of laws that makes enforcing ownership of one’s name, likeness and voice more complicated. The variation among postmortem right of publicity laws is even greater. As AI impersonation concerns have grown more prevalent over the last year, the music business has pushed harder for a federal right of publicity.
The ELVIS Act replaces the Personal Rights Protection Act of 1984, which was passed, in part, to extend Elvis Presley’s publicity rights after he passed away. (At the time, Tennessee did not recognize a postmortem right of publicity.) Along with explicitly including a person’s voice as a protected right for the first time, the ELVIS Act also broadens which uses of one’s name, image, photograph and voice are barred.
Previously, the Personal Rights Protection Act only banned uses of a person’s name, photograph and likeness “for purpose of advertising,” which would not include the unauthorized use of AI voices for performance purposes. The ELVIS Act does not limit liability based on context, so it would likely bar any unauthorized use, including in a documentary, song or book, among other mediums.
The federal government is also working on solutions to address publicity rights concerns. Within hours of Gov. Lee’s introduction of the ELVIS Act in Tennessee back in January, a bipartisan group of U.S. House lawmakers revealed the No Artificial Intelligence Fake Replicas And Unauthorized Duplications Act (No AI FRAUD Act), which aims to establish a framework for protecting one’s voice and likeness on a federal level and lays out First Amendment protections. It is said to complement the Senate’s Nurture Originals, Foster Art, and Keep Entertainment Safe Act (NO FAKES Act), a draft bill that was introduced last October.
While most of the music business is aligned on creating a federal right of publicity, David Israelite, president/CEO of the National Music Publishers’ Association (NMPA), warned in a speech delivered at an Association of Independent Music Publishers (AIMP) meeting in February that “while we are 100% supportive of the record labels’ priority to get a federal right of publicity…it does not have a good chance. Within the copyright community, we don’t even agree on [it]. Guess who doesn’t want a federal right of publicity? Film and TV. Guess who’s bigger than the music industry? Film and TV.”
The subject of AI voice cloning has been a controversial topic in the music business since Ghostwriter released the so-called “Fake-Drake” song “Heart On My Sleeve,” which used the AI technology without permission. In some cases, this form of AI can present novel creative opportunities — including its use for pitch records, lyric translations, estate marketing and fan engagement — but it also poses serious threats. If an artist’s voice is cloned by AI without their permission or knowledge, it can confuse, offend, mislead or even scam fans.
There is no shortage of AI voice synthesis companies on the market today, but Voice-Swap, founded and led by Dan “DJ Fresh” Stein, is trying to reimagine what these companies can be.
The music producer and technologist intends Voice-Swap to act as not just a simple conversion tool but an “agency” for artists’ AI likenesses. He’s also looking to solve the ongoing question of how to monetize these voice models in a way that gets the most money back to the artists — a hotly contested topic since anonymous TikTok user Ghostwriter employed AI renderings of Drake and The Weeknd’s voices without their permission on the viral song “Heart On My Sleeve.”
In an exclusive interview with Billboard, Stein and Michael Pelczynski, a member of the company’s advisory board and former vp at SoundCloud, explain their business goals as well as their new monetization plan, which includes providing a dividend for participating artists and payment to artists every time a user employs their AI voice — not just when the resulting song is released commercially and streamed on DSPs. The company also reveals that it’s working on a new partnership with Imogen Heap to create her voice model, which will arrive this summer.
Voice-Swap sees the voice as the “new real estate of IP,” as Pelczynski puts it — just another form of ownership that can allow a participating artist to make passive income. (The voice, along with one’s name and likeness, is considered a “right of publicity,” which is currently regulated differently from state to state.)
In addition to seeing AI voice technology as a useful tool to engage fans of notable artists like Heap and make translations of songs, the Voice-Swap team also believes AI voices represent a major opportunity for session vocalists with distinct timbres but lower public profiles to earn additional income. On its platform now, the company has a number of session vocalists of varying vocal styles available for use; Voice-Swap sees session vocalists’ AI voice models as potentially valuable to songwriters and producers who may want to shape-shift those voices during writing and recording sessions. (As Billboard reported in August, using AI voice models to better tailor pitch records to artists has become a common use-case for the emerging technology.)
“We like to think that, much like a record label, we have a brand that we want to build with the style of artists and the quality we represent at Voice-Swap,” says Stein. “It doesn’t have to be a specific genre, but it’s about hosting unique and incredible voices as opposed to [just popular artists].”
Last year, we saw a lot of fear and excitement surrounding this technology as Ghostwriter appeared on social media and Grimes introduced her own voice model soon after. How does your approach compare to these examples?
Pelczynski: This technology did stoke a lot of fear at first. This is because people see it as a magic trick. When you don’t know what’s behind it and you just see the end result and wonder how it just did that, there is wonder and fear that comes. [There is now the risk] that if you don’t work with someone you trust on your vocal rights, someone is going to pick up that magic trick and do it without you. That’s what happened with Ghostwriter and many others.
The one real main thing to emphasize is the magic trick of swapping a voice isn’t where the story ends, it’s where it begins. And I think Grimes in particular is approaching it with an intent to empower artists. We are, too. But I think where we differentiate is the revenue stream part. With the Grimes model, you create what you want to create and then the song goes into the traditional ecosystem of streaming and other ways of consuming music. That’s where the royalties are made off of that.
We are focused on the inference. Our voice artists get paid on the actual conversion of the voice. Not all of these uses of AI voices end up on streaming, so this is important to us. Of course, if the song is released, additional money for the voice can be made then, too. As far as we know, we are the first platform to pay royalties on the inference, the first conversion.
Stein: We also allow artists the right to release their results through any distributor they want. [Grimes’ model is partnered exclusively with TuneCore.] We see ourselves a bit like an agency for artists’ voices.
What do you mean by an “agency” for artists’ voices?
Stein: When we work with an artist at Voice-Swap we intend to represent them and license their voice models created with us to other platforms to increase their opportunities to earn income. It’s like working with an agent to manage your live bookings. We want to be the agent for the artists’ AI presence and help them monetize it on multiple platforms but always with their personal preferences and concerns in mind.
What kinds of platforms would be interested in licensing an AI voice model from Voice-Swap?
Stein: It is early days for all of the possible use cases, but we think the most obvious example at the moment is music production platforms [or DAWs, short for digital audio workstation] that want to use voice models in their products.
There are two approaches you can take [as an AI voice company.] We could say we are a SaaS platform, and the artist can do deals with other platforms themselves. But the way we approach this is we put a lot of focus into the quality of our models and working with artists directly to keep improving it. We want to be the one-stop solution for creating a model the artist is proud of.
I think the whole thing with AI and where this technology is going is that none of us know what it’s going to be doing 10 years from now. So for us, this was also about getting into a place where we can build that credibility in those relationships and not just with the artists. We want to work with labels, too.
Do you have any partnerships with DAWs or other music-making platforms in place already?
Pelczynski: We are in discussions and under NDA pending an announcement. Every creator’s workflow is different — we want our users to have access to our roster of voices wherever they feel most comfortable, be that via the website, in a DAW or elsewhere. That’s why we’re exploring these partnerships, and why we’ve designed our upcoming VST [virtual studio technology] to make that experience even more seamless. We also recently announced a partnership with SoundCloud, with deeper integrations aimed at creators forthcoming.
Ultimately, the more places our voices are available, the more opportunities there are for new revenue for the artists, and that’s our priority.
Can some music editing take place on the Voice-Swap website, or do these converted voices need to be exported?
Pelczynski: Yes, Dan has always wanted to architect a VST so that it can act like a plug-in in someone’s DAW, but we also have the capability of letting users edit and do the voice conversion and some music editing on our website using our product Stem-Swap. That’s an amazing playground for people that are just coming up. It is similar to how BandLab and others are a good quick way to experiment with music creation.
How many users does Voice-Swap have?
Pelczynski: We have 140,000 verified unique users and counting.
Can you break down the specifics of how much your site costs for users?
Pelczynski: We run a subscription and top-up pricing system. Users pay a monthly or one-off fee and receive audio credits. Credits are then used for voice conversion and stem separation, with more creator tools on the way.
How did your team get connected with Imogen Heap, and given all the competitors in the AI voice space today, why do you think she picked Voice-Swap?
Pelczynski: We’re very excited to be working with her. She’s one of many established artists that we’re working on currently in the pipeline, and I think our partnership comes down to our ethos of trust and consent. I know it sounds trite, but I think it’s absolutely one of the cornerstones to our success.
As artificial intelligence and its potential effects on creativity, copyright and a host of other sectors continue to dominate conversation, Universal Music Group and electronic instrument maker Roland Corporation have teamed up to create a set of guidelines that the companies published under the heading “Principles for Music Creation With AI.”
The seven principles, or “clarifying statements,” as the companies put it, are an acknowledgment that AI is certainly here to stay, but that it should be used in a responsible and transparent way that protects and respects human creators. The companies say that they hope additional organizations will sign on to support the framework. The seven principles, which can be found with slightly more detail at this site, are as follows:
— We believe music is central to humanity.
— We believe humanity and music are inseparable.
— We believe that technology has long supported human artistic expression, and applied sustainably, AI will amplify human creativity.
— We believe that human-created works must be respected and protected.
— We believe that transparency is essential to responsible and trustworthy AI.
— We believe the perspectives of music artists, songwriters, and other creators must be sought after and respected.
— We are proud to help bring music to life.
The creation of the principles is part of a partnership between UMG and Roland that will also involve research projects, including one designed to create “methods for confirming the origin and ownership of music,” according to a press release.
“As companies who share a mutual history of technology innovation, both Roland and UMG believe that AI can play an important role in the creative process of producing music,” Roland’s chief innovation officer Masahiro Minowa said in a statement. “We also have a deep belief that human creativity is irreplaceable, and it is our responsibility to protect artists’ rights. The Principles for Music Creation with AI establishes a framework for our ongoing collaboration to explore opportunities that converge at the intersection of technology and human creativity.”
Universal has been proactive around the issue of AI in music over the past several months, partnering with YouTube last summer on a series of AI principles and an AI Music Incubator to help artists use AI responsibly, forming a strategic partnership with BandLab to create a set of ethical practices around music creation, and partnering with Endel on functional music, among other initiatives. But UMG has also taken stands to protect against what it sees as harmful uses of AI, including suing AI platform Anthropic for allegedly using its copyrights to train its software in creating new works, and cited AI concerns as part of its rationale for allowing its licensing agreement with TikTok to expire earlier this year.
“At UMG, we have long recognized and embraced the potential of AI to enhance and amplify human creativity, advance musical innovation, and expand the realms of audio production and sound technology,” UMG’s executive vp and chief digital officer Michael Nash said in a statement. “This can only happen if it is applied ethically and responsibly across the entire industry. We are delighted to collaborate with Roland, to explore new opportunities in this area together, while helping to galvanize consensus among key stakeholders across music’s creative community to promote adoption of these core principles with the goal of ensuring human creativity continues to thrive alongside the evolution of new technology.”
LONDON — Sweeping new laws regulating the use of artificial intelligence (AI) in Europe, including controls around the use of copyrighted music, have been approved by the European Parliament, following fierce lobbying from both the tech and music communities.
Members of the European Parliament (MEPs) voted in favor of the EU’s Artificial Intelligence Act by a clear majority of 523 votes for, 46 against and 49 abstentions. The “world first” legislation, which was first proposed in April 2021 and covers a wide range of AI applications including biometric surveillance and predictive policing, was provisionally approved in December, but Wednesday’s vote formally establishes its passage into law.
The act places a number of legal and transparency obligations on tech companies and AI developers operating in Europe, including those working in the creative sector and music business. Among them is the core requirement that companies using generative AI or foundation AI models like OpenAI’s ChatGPT or Anthropic’s Claude 2 provide detailed summaries of any copyrighted works, including music, that they have used to train their systems.
Significantly, the law’s transparency provisions apply regardless of when or where in the world a tech company scraped its data. For instance, even if an AI developer scraped copyrighted music and/or trained its systems in a non-EU country — or bought data sets from outside the 27-member bloc — as soon as its systems are used or made available in Europe, the company is required to make publicly available a “sufficiently detailed summary” of all copyright-protected music it has used to create AI works.
There is also a requirement that any training data sets used in generative AI music or audio-visual works be watermarked, giving rights holders a traceable path to track and block the illegal use of their catalogs.
In addition, content created by AI, as opposed to human works, must be clearly labeled as such, while tech companies have to ensure that their systems cannot be used to generate illegal and infringing content.
Large tech companies that break the rules — which govern all applications of AI inside the 27-member bloc of EU countries, including so-called “high risk” uses — will face fines of up to €35 million or 7% of global annual turnover. Start-ups and smaller tech operations will face proportionate financial penalties.
Speaking ahead of Wednesday’s vote, which took place in Strasbourg, co-rapporteur Brando Benifei said the legislation means that “unacceptable AI practices will be banned in Europe and the rights of workers and citizens will be protected.”
Co-rapporteur Dragos Tudorache called the AI Act “a starting point for a new model of governance built around technology.”
European legislators first proposed introducing regulation of artificial intelligence in 2021, although it was the subsequent launch of ChatGPT — followed by the high-profile release of “Heart on My Sleeve,” a track that featured AI-powered imitations of vocals by Drake and The Weeknd, last April — that made many music executives sit up and pay closer attention to the technology’s potential impact on the record business.
In response, lobbyists stepped up their efforts to convince lawmakers to add transparency provisions around the use of music in AI – a move which was fiercely opposed by the technology industry, which argued that tougher regulations would put European AI developers at a competitive disadvantage.
Now that the AI Act has been approved by the European Parliament, the legislation will undergo a number of procedural rubber-stamping stages before it is published in the EU’s Official Journal — most likely in late April or early May — with its regulations coming into force 20 days after that.
There are, however, tiered compliance deadlines for tech companies, and some of the Act’s provisions do not fully apply until up to two years after its enactment. (The rules governing existing generative AI models take effect after 12 months, although any new generative AI companies or models entering the European market after the Act comes into force must comply immediately.)
In response to Wednesday’s vote, a coalition of European creative and copyright organizations, including global recorded-music trade body IFPI and international music publishing trade group ICMP, issued a joint statement thanking regulators and MEPs for the “essential role they have played in supporting creators and rightsholders.”
“While these obligations provide a first step for rightsholders to enforce their rights, we call on the European Parliament to continue to support the development of responsible and sustainable AI by ensuring that these important rules are put into practice in a meaningful and effective way,” said the 18 signatories, which also included European independent labels trade association IMPALA, European Authors Society GESAC and CISAC, the international trade organization for copyright collecting societies.
As broadcasters begin assembling in Nashville this morning (Feb. 28) for the Country Radio Seminar, expect a lot of talk. About talk.
Radio personalities’ importance has been on the decline for decades. They used to pick the music on their shows. That privilege was taken away. Then many were encouraged to cut down their segues and get to the music. Then syndicated morning and overnight shows moved in to replace local talent.
But once the streaming era hit and started stealing some of radio’s time spent listening, terrestrial programmers began reevaluating their product to discover what differentiates it from streaming. Thus, this year’s CRS focus is talk.
“That’s what’s so important about this year,” says iHeartMedia talent Brooke Taylor, who voicetracks weekday shows in three markets and airs on 100 stations on weekends. “The radio on-air personality is sort of regaining their importance in the stratosphere of a particular station.”
Taylor will appear on a panel designed for show hosts — “Personal Branding: It’s Not Ego, It’s Branding!” — but it’s hardly the only element geared to the talent. Other entries include “On Air Personalities: The OG Influencers,” a research study about audience expectations of their DJs; a podcasting deep dive; and four different panels devoted to the threats and opportunities in artificial intelligence (AI).
As it turns out, artifice is not particularly popular, according to the research study “On Air Talent and Their Roles on All Platforms,” conducted by media analytics firm Smith Geiger.
“Americans have very mixed feelings about AI,” says Smith Geiger executive vp of digital media strategies Andrew Finlayson. “This research proves that the audience is very interested in authentic content and authentic voices.”
Not to say that AI will be rejected. Sounds Profitable partner Tom Webster expects that it will be effective at matching advertisers to podcasts that fit their audience and market priorities. And he also sees it as a research tool that can assist content creation.
“If I’m a DJ and I’ve got a break coming up, and I’ve pre-sold or back-sold the same record 1,000 times, why not ask an assistant, ‘Give me something new about this record to say’?” Webster suggests. “That’s the easy kind of thing right there that can actually help the DJ do their job.”
CRS has been helping country radio do its job for more than 50 years, providing networking opportunities, exposure to new artists and a steady array of educational panels that grapple with legal issues, industry trends and listener research. In the early 1980s, the format’s leaders aspired to make country more like adult contemporary, offering a predictable experience that would be easy to consume for hours in an office setting. The music, and radio production techniques, became more aggressive in the ’90s, and as technology provided a bulging wave of competitors and new ways to move around the dial, stations have been particularly challenged to maintain listeners’ attention during the 21st century.
Meanwhile, major chains have significantly cut staffs. Many stations cover at least two daily shifts with syndicated shows, and the talent that’s left often works on multiple stations in several different markets, sometimes covering more than one format. Those same personalities are expected to maintain a busy social media presence and potentially establish a podcast, too.
That’s an opportunity, according to Webster. Podcast revenue has risen to an estimated $2.5 billion in advertising and sponsorship billing, he says, while radio income has dropped from around $14 billion to $9 billion. He envisions that the two platforms will be on equal financial footing in perhaps a decade, and he believes radio companies and personalities should get involved if they haven’t already.
“It’s difficult to do a really good podcast,” Webster observes. “We talk a lot about the number of podcasts — there are a lot, and most podcasts are not great. Most podcasts are listened to by friends and family. There’s no barrier to entry to a podcast, and then radio has this stable of people whose very job it is to develop a relationship with an audience. That is the thing that they’re skilled at.”
That ’80s idea of radio as predictable background music has been amended. It’s frequently still “a lean-back soundtrack to what it is that you’re doing,” Webster suggests, though listeners want to be engaged with it.
“One of the people in the survey, verbatim, said it’s ‘a surprise box,’ ” Finlayson notes. “I think people like that serendipity that an on-air personality who really knows and understands the music can bring to the equation. And country music knowledge is one of the things that the audience craves from an on-air talent.”
It’s a challenge. Between working multiple stations, creating social media content and podcasting, many personalities are so stretched that it has become difficult to maintain a personal life, which in turn reduces their sources for new material. Add in the threat of AI, and it’s an uneasy time.
“What I see is a great deal of anxiety and stress levels, and I don’t know how we fix it,” concedes Country Radio Broadcasters executive director R.J. Curtis. “There’s just so much work put on our shoulders, it’s hard to manage that and then have a life.”
Curtis made sure that CRS addresses that, too, with “Your Brain Is a Liar: Recognizing and Understanding the Impact of Your Mental Health,” a presentation delivered by 25-year radio and label executive Jason Prinzo.
That tension is one of the ways that on-air talent likely relates to its audience — there are plenty of stressed, overbooked citizens in every market. And as tech continues to consume their lives, it naturally feeds the need for authenticity, which is likely to be a buzzword as CRS emphasizes radio’s personalities.
“Imagine having a radiothon for St. Jude with an AI talent,” Taylor says. “You’ll get a bunch of facts, but you’ll never get a tear. You’ll never get a real story. You’ll never get that shaky voice talking about somebody in your family or somebody that you know has cancer. The big thing that just will never be replaced is that emotion.”
Each year during Grammy week, members of the Association of Independent Music Publishers (AIMP) gather at Lawry’s steakhouse in Beverly Hills to hear a speech from David Israelite, president and CEO of the National Music Publishers’ Association (NMPA). This year, Israelite discussed the successes of the Music Modernization Act, the new UMG-TikTok licensing feud, the viability of artificial intelligence regulation and more.
He started the presentation with slides showing 2022 publishing revenue by category: performance (48.29%, or $2.7 billion), mechanical (20.27%, or $1.1 billion), synch (26.07%, or $1.4 billion) and other (5.37%, or $300 million). Synch, he said, is the fastest-growing source of revenue.
Israelite focused much of his time on the Music Modernization Act, which was passed about five years ago. “[What] I don’t want you to forget is just how amazing the Music Modernization Act was and is for this industry,” he said. “I believe that it is the most important legislation in the history of the music business… You’re going to start to take for granted some of the things… but we had to fight and win to get this done.” He pointed to successes of the landmark law like the change in the rate standard to a willing seller, willing buyer model and its creation of the Mechanical Licensing Collective (The MLC).
Earlier this week, the MLC (and the digital licensee coordinator, DLC) began the process of its first-ever re-designation. This is a routine five-year reassessment of the organization and how well it is doing its job of administering the blanket mechanical license created by the MMA. As part of the re-designation process, songwriters, publishers and digital services are allowed to submit comments to the Copyright Office about the MLC’s performance. “Many of you will have a role in offering your opinions to the Copyright Office about that,” said Israelite. “The process needs to be respected and played out, but [The MLC] will be re-designated, and it is an absolute no-brainer decision. There’s a lot about the MLC that I want to remind you about.”
Israelite then highlighted the organization’s “transparency” and its lack of administration fees for publishers, noting that, based on projected 2023 streaming revenue for recorded music ($6.3 billion) and publishing ($1.7 billion), “the split is the closest it has ever been,” attributing this, in part, to the MLC’s work.
He also addressed Grammy week’s biggest story: the UMG-TikTok licensing standoff. “I’m only going to say two things about TikTok: the first is I think music is tremendously important to the business model of TikTok, and, secondly, I am just stating the fact that the NMPA model license, which many of you are using, with TikTok expires in April.” At that point, the NMPA can either renew its model license with TikTok or walk away. If it were to follow UMG’s lead, indie publishers could negotiate with TikTok directly for their own licenses or leave the platform entirely.
Later, addressing artificial intelligence concerns, he pledged his support for the creation of a federal right of publicity, but admitted, “I want to be honest with you, it does not have a good chance.” Even though the music business is vying for its adoption, Israelite said the film and TV industry does not want it. “Within the copyright community we don’t agree… and guess who is bigger than music? Film and TV.”
Still, he believes there is merit in fighting for the proposed bill. “It might help with state legislative efforts and it raises the profile,” he said, adding that his priority for AI regulation is requiring transparency from AI companies and keeping records of how AI models are trained.