Writing and playing a song once required some level of musical training, and recording was a technically complex process involving expensive equipment. Today, thanks to advances in artificial intelligence, a growing number of companies allow anyone in the world to skip this process and create a new song with a click of a button.
This is an exciting prospect in Silicon Valley. “It’s really easy to get investment in that sort of thing right now,” Lifescore co-founder and CTO Tom Gruber says dryly, “because everyone thinks that genAI is going to change the whole world and there will be no human creators left.” (Lifescore offers “AI-powered music generation in service of artists and rights holders.”)
Recently, however, some executives in the AI music space have been asking: How much do average users actually want to generate their own songs?
“For whatever reason, you’re just not seeing an extreme level of adoption of these products yet among the everyday consumer,” notes one founder of an AI music company who spoke on the condition of anonymity. “Where’s the 80 to 100 million users on this stuff?”
“My hunch is no text-to-music platform will have decent retention figures yet,” says Ed Newton-Rex, who founded the AI music generation company Jukedeck and then worked at Stability AI. “It’s a moment of magic when you first try a generative music platform that works well. Then most people don’t really have a use for it.” So far, the most popular use for song generation tools appears to be making meme songs.
While there are hundreds of companies working on genAI music technology, the two that have generated the most headlines this year are Suno and Udio. The former recently announced that 10 million users have tested it in eight months, while the latter told Bloomberg that 600,000 people tried its song generation product in the first two weeks. Neither company said how many of those testers became regular users. Compare this with ChatGPT, which was estimated to gain 100 million monthly users within two months of launch. (Though there’s chatter that growth is leveling off there, too.)
It’s early for many of these AI song generation companies, of course. That said, executives who work at the intersection of music and artificial intelligence keep wondering: How can tools that spit out new tracks on command help users?
“You can end up with a really cool tech that doesn’t really solve a real problem,” Gruber notes. “If I want something that sounds like a folk song and has a clever lyric, I’ve already got all I can eat on Spotify, right? There’s no scarcity there.”
Part of the reason for ChatGPT’s explosion, according to Antony Demekhin, co-founder of Tuney, is that it “clearly solves a bunch of problems — it can edit text for you, help you code.” (Tuney develops “ethical music AI for creative media.”) Even so, a recent multi-country survey from the Reuters Institute noted that for ChatGPT, “frequent use is rare… Many of those who say they have used generative AI have only used it once or twice.”
Within the subset of survey respondents who said they have used generative AI for “creating media,” “making audio” was the ninth most popular task, with 3% of people engaging in it. The Reuters Institute’s survey indicates that generative AI tools are more commonly used for email writing, creative writing, and coding.
“How many ‘non-musicians’ actually wanted to create music before?” asks Michael “MJ” Jacob, founder of Lemonaide, a company developing “creative AI for musicians” (around 10,000 users). “I don’t think it’s true to say ‘everyone,’ as tempting as it may be.”
Another factor that could be holding back AI audio creation, according to Diaa El All, founder and CEO of Soundful, is the number of competing companies and the difficulty of judging the quality of their output. (Soundful, which bills itself as “the leading AI Music Studio for creators,” has a user-count “in the seven figures,” El All says.) Mike Caren, founder of the label and publishing company Artist Partner Group, believes that many people will try an AI song generator “that’s not that good, have a bad experience, and not come back for six months or a year.”
The uncertain regulatory climate almost certainly inhibits the spread of AI song-making tools as well. For now, in the U.S., there are open questions about the copyrightability of AI generated tracks, potentially limiting their commercial value.
In addition, these programs need to be trained on large musical data-sets to generate credible tracks. While many prominent tech companies believe they should be allowed to undertake this process at will, labels and publishers argue that they need licensing agreements.
In other sectors, AI companies have already been sued for training on news articles and images without permission. Until the rules around training are clarified, through court cases or regulation, “corporate brands don’t want any of the risk” that comes with opening themselves up to potential litigation, explains Chris Walch, CEO and co-founder of Lifescore.
AI music leaders also believe their song generation technologies still suffer from a bad reputation. “I think the tech-lash and the stigma is really unexpected and very powerful,” the company founder says.
OpenAI CEO Sam Altman recently discussed this on the All-In Podcast: “Let’s say we paid 10,000 musicians to create a bunch of music just to make a great training set where the music model could learn everything about song structure and what makes a good catchy beat,” he said. “I was kind of posing that as a thought experiment to musicians, and they’re like, ‘Well, I can’t object to that on any principled basis at that point. And yet, there’s still something I don’t like about it.’” (So far, OpenAI has steered clear of the music industry.)
While the average civilian’s interest in AI song generation remains unproven, plenty of producers and aspiring artists, who are already making music on a daily basis, would like to test products that spark ideas or streamline their workflow. That’s still a large user-base — “the global total addressable market for digital music producers alone is about 66 million,” according to Splice CEO Kakul Srivastava, “and that continues to grow at a pretty rapid pace” — though it’s not the entire world’s population.
“We were all talking about how artists are screwed, because that’s a dramatic story,” Demekhin says. “To me, what’s more likely is these tools just get integrated into the existing ecosystem, and people start using it as a source for material like a Splice,” which provides artists and producers sample packs full of musical building blocks.
Caren believes the AI music tools will be taken up first by musicians, next by creators looking for sound in their videos, then by fans and “music aficionados” who want to express their appreciation for their favorite artists by making something.
“The question of how far it penetrates to people who are not significant music fans?” he asks. “I don’t know.”
On May 24, Sexyy Red and Drake teamed up on the track “U My Everything.” And in a surprise — Drake’s beef with Kendrick Lamar had seemingly ended — the track samples “BBL Drizzy” (originally created using AI by King Willonius, then remixed by Metro Boomin) during the Toronto rapper’s verse.
It’s another unexpected twist for what many are calling the first-ever AI-generated hit, “BBL Drizzy.” Though Metro Boomin’s remix went viral, his version never appeared on streaming services. “U My Everything” does, making it the first time an AI-generated sample has appeared on an official release — and posing new legal questions in the process. Most importantly: Does an artist need to clear a song with an AI-generated sample?
“This sample is very, very novel,” says Donald Woodard, a partner at the Atlanta-based music law firm Carter Woodard. “There’s nothing like it.” Woodard became the legal representative for Willonius, the comedian and AI enthusiast who generated the original “BBL Drizzy,” after the track went viral and has been helping Willonius navigate the complicated, fast-moving business of viral music. Woodard says music publishers have already expressed interest in signing Willonius for his track, but so far, the comedian/creator is still only exploring the possibility.
Willonius told Billboard that it was “very important” to him to hire the right lawyer as his opportunities mounted. “I wanted a lawyer that understood the landscape and understood how historic this moment is,” he says. “I’ve talked to lawyers who didn’t really understand AI, but I mean, all of us are figuring it out right now.”
Working off recent guidance from the U.S. Copyright Office, Woodard says that the master recording of “BBL Drizzy” is considered “public domain,” meaning anyone can use it royalty-free and it is not protected by copyright, since Willonius created the master using AI music generator Udio. But because Willonius did write the lyrics to “BBL Drizzy,” copyright law says he should be credited and paid for the “U My Everything” sample on the publishing side. “We are focused on the human portion that we can control,” says Woodard. “You only need to clear the human side of it, which is the publishing.”
In hip-hop, it is customary to split publishing ownership and royalties 50/50: One half is expected to go to the producers, the other to the lyricists (who are often also the artists). “U My Everything” was produced by Tay Keith, Luh Ron, and Jake Fridkis, so it is likely that those three producers split that half of the publishing in some fashion. The other half is what Willonius could be eligible for, along with fellow lyricists Drake and Sexyy Red. Woodard says the splits were solidified “post-release” on Tuesday, May 28, but declined to specify what percentage of the publishing Willonius will take home. “I will say though,” Woodard says, cracking a smile. “He’s happy.”
Upon the release of “U My Everything,” Willonius was not listed as a songwriter on Spotify or Genius, both of which list detailed credits but can contain errors. It turns out the reason for the omission was simple: the deal wasn’t done yet. “We hammered out this deal in the 24th hour,” jokes Woodard, who adds that he was unaware that “U My Everything” sampled “BBL Drizzy” until the day of its release. “That’s just how it goes sometimes.”
It is relatively common for sample clearance negotiations to drag on long after a song’s release. Some rare cases, like Travis Scott’s epic “Sicko Mode,” which credits about 30 writers due to its myriad samples, can take years. Willonius tells Billboard that when he got the news about the “U My Everything” release, he was “about to enter a meditation retreat” in Chicago and let his lawyer “handle the business.”
This sample clearance process poses another question: Should Metro Boomin be credited, too? According to Metro’s lawyer, Uwonda Carter, who is also a partner at Carter Woodard, the simple answer is no. She adds that Metro is not pursuing any ownership or royalties for “U My Everything.”
“Somehow people attach Metro to the original version of ‘BBL Drizzy,’ but he didn’t create it,” Carter says. “As long as [Drake and Sexyy Red] are only using the original version [of “BBL Drizzy”], that’s the only thing that needs to be cleared,” she continues, adding that Metro is not the type of creative “who encroaches upon work that someone else does.”
When Metro’s remix dropped on May 5, Carter says she spoke with the producer, his manager and his label, Republic Records, to discuss how they could officially release the song and capitalize on its grassroots success, but then they ultimately decided against doing a proper release. “Interestingly, the label’s position was if [Metro’s] going to exploit this song, put it up on DSPs, it’s going to need to be cleared, but nobody knew what that clearance would look like because it was obviously AI.”
She adds, “Metro decided that he wasn’t going to exploit the record because trying to clear it was going to be the Wild, Wild West.” In the end, however, the release of “U My Everything” still threw Carter Woodard into that copyright wilderness, forcing them to find a solution for their other client, Willonius.
In the future, the two lawyers predict that AI could make their producer clients’ jobs a lot easier, now that there is a precedent for getting AI-generated masters royalty-free. “It’ll be cheaper,” says Carter. “Yes, cleaner and cheaper,” says Woodard.
Carter does acknowledge that while AI sampling could help some producers with licensing woes, it could hurt others, particularly the “relatively new” phenomenon of “loop producers.” “I don’t want to minimize what they do,” she says, “but I think they have the most to be concerned about [with AI].” Carter notes that using a producer’s loops can cost 5% to 10% or more of the producer’s side of the publishing. “I think that, at least in the near future, producers will start using AI sampling and AI-generated records so they could potentially bypass the loop producers.”
Songwriter-turned-publishing executive Evan Bogart previously told Billboard he feels AI could never replace “nostalgic” samples (like Jack Harlow’s “First Class,” which samples Fergie’s “Glamorous,” or Latto’s “Big Energy,” which samples Mariah Carey’s “Fantasy”), where the old song imbues the new one with greater meaning. But he said he could foresee AI serving as a digital alternative to crate digging for obscure samples to chop up and manipulate beyond recognition.
Though the “U My Everything” complications are over — and set a new precedent for the nascent field of AI sampling in the process — the legal complications with “BBL Drizzy” will continue for Woodard and his client. Now, they are trying to get the original song back on Spotify after it was flagged for takedown. “Some guy in Australia went in and said that he made it, not me,” says Willonius. A representative for Spotify confirms to Billboard that the takedown of “BBL Drizzy” was due to a copyright claim. “He said he made that song and put it on SoundCloud 12 years ago, and I’m like, ‘How was that possible? Nobody was even saying [BBL] 12 years ago,’” Willonius says. (Udio has previously confirmed to Billboard that its backend data shows Willonius made the song on its platform).
“I’m in conversations with them to try to resolve the matter,” says Woodard, but “unfortunately, the process to deal with these sorts of issues is not easy. Spotify requires the parties to reach a resolution and inform Spotify once this has happened.”
Though there is precedent for other “public domain” music being disqualified from earning royalties, so far, given how new this all is, there is no Spotify policy that would bar an AI-generated song from earning royalties. Such songs are also allowed to stay up on the platform as long as they do not conflict with Spotify’s platform rules, says a representative from Spotify.
Despite the challenges “BBL Drizzy” has posed, Woodard says it’s remarkable, after 25 years in practice as a music attorney, that he is part of setting a precedent for something so new. “The law is still being developed and the guidelines are still being developed,” Woodard says. “It’s exciting that our firm is involved in the conversation, but we are learning as we go.”
This story is included in Billboard‘s new music technology newsletter, Machine Learnings.
Artificial Intelligence is one of the buzziest — and most rapidly changing — areas of the music business today. A year after the fake-Drake song signaled the technology’s potential applications (and dangers), industry lobbyists on Capitol Hill, like RIAA’s Tom Clees, are working to create guard rails to protect musicians — and maybe even get them paid.
Meanwhile, entrepreneurs like Soundful’s Diaa El All and BandLab’s Meng Ru Kuok (who oversees the platform as founder and CEO of its parent company, Caldecott Music Group) are showing naysayers that AI can enhance human creativity rather than just replacing it. Technology and policy experts alike have promoted the use of ethical training data and partnered with groups like Fairly Trained and the Human Artistry Coalition to set a positive example for other entrants into the AI realm.
What is your biggest career moment with AI?
Diaa El All: I’m proud of starting our product Soundful Collabs. We found a way to do it with the artists’ participation in an ethical way and that we’re not infringing on any of their actual copyrighted music. With Collabs, we make custom AI models that understand someone’s production techniques and allow fans to create beats inspired by those techniques.
Meng Ru Kuok: Being the first creation platform to support the Human Artistry Coalition was a meaningful one. We put our necks out there as a tech company where people would expect us to actually be against regulation of AI. We don’t think of ourselves as a tech company. We’re a music company that represents and helps creators. Protecting them in the future is so important to us.
Tom Clees: I’ve been extremely proud to see that our ideas are coming through in legislation like the No AI Fraud Act in the House [and] the No Fakes Act in the Senate.
The term “AI” represents all kinds of products and companies. What do you consider the biggest misconception around the technology?
Clees: There are so many people who work on these issues on Capitol Hill who have only ever been told that it’s impossible to train these AI platforms and do it while respecting copyright and doing it fairly, or that it couldn’t ever work at scale. (To El All and Kuok.) A lot of them don’t know enough about what you guys are doing in AI. We need to get [you both] to Washington now.
Kuok: One of the misconceptions that I educate [others about] the most, which is counterintuitive to the AI conversation, is that AI is the only way to empower people. AI is going to have a fundamental impact, but we’re taking for granted that people have access to laptops, to studio equipment, to afford guitars — but most places in the world, that isn’t the case. There are billions of people who still don’t have access to making music.
El All: A lot of companies say, “It can’t be done that way.” But there is a way to make technological advancement while protecting the artists’ rights. Meng has done it, we’ve done it, there’s a bunch of other platforms who have, too. AI is a solution, but not for everything. It’s supposed to be the human plus the technology that equals the outcome. We’re here to augment human creativity and give you another tool for your toolbox.
What predictions do you have for the future of AI and music?
Clees: I see a world where so many more people are becoming creators. They are empowered by the technologies that you guys have created. I see the relationship between the artist and fan becoming so much more collaborative.
Kuok: I’m very optimistic that everything’s going to be OK, despite obviously the need for daily pessimism to [inspire the] push for the right regulation and policy around AI. I do believe that there’s going to be even better music made in the future because you’re empowering people who didn’t necessarily have some functionality or tools. In a world where there’s so much distribution and so much content, it enhances the need for differentiation more, so that people will actually stand up and rise to the top or get even better at what they do. It’s a more competitive environment, which is scary … but I think you’re going to see successful musicians from every corner of the world.
El All: I predict that AI tools will help bring fans closer to the artists and producers they look up to. It will give accessibility to more people to be creative. If we give them access to more tools like Soundful and BandLab and protect them also, we could create a completely new creative generation.
This story will appear in the June 1, 2024, issue of Billboard.
Free music streaming shouldn’t be so free, Rob Stringer, CEO of Sony Music Entertainment, suggested Wednesday during a presentation to Sony Corp. analysts and investors.
The value of paid subscription “remains incredible,” said Stringer in prepared remarks during parent company Sony’s Business Segment Meeting 2024. But recent price increases — by Spotify, Apple Music, Amazon Music, YouTube and, most recently, Pandora — have widened what Stringer called the “price gap” between free and paid streaming. Now, Sony wants streaming companies to get more from their free listeners.
“In mature markets, we hope that our partners close that gap by asking consumers using ad-supported services to additionally pay a modest fee,” said Stringer. “This would help develop this segment of the streaming business to be more than just a marketing funnel for paid subscription and still be a tremendous value for users. We have a shared interest in better monetization of free tiers. At Sony Music, we think everyone is willing to pay something for access to virtually the entire universe of music.”
Free streaming provides an opportunity to attract paying subscribers but returns far less per listener than subscriptions. Even though Spotify has 62% more free listeners than subscribers, advertising accounted for just 10.7% of its first-quarter revenue, compared to 89.3% from subscriptions. Another round of Spotify price increases this month in the U.K. and Australia portends additional increases in the U.S. and other major markets, which would further widen the gap between premium and free streaming. And “even if advertising will become a better part of the story, it’s still a relatively small part of our overall revenue mix,” Spotify CEO Daniel Ek said during the April 23 earnings call.
Charging for ad-supported music would break from a long tradition of providing listeners with a free, on-demand streaming option. YouTube and Spotify are the two largest on-demand, ad-supported platforms that stream music. Amazon Music has a free tier with limited functionality. In the U.S., Pandora has about 39 million monthly active users for its ad-supported internet radio service, which offers fewer interactive capabilities than YouTube or Spotify. But paid, ad-supported streaming is common in the video world. Video on-demand services such as Hulu and Netflix offer low-price tiers with advertisements and charge higher prices to eliminate advertising altogether.
Sony Music also wants to extract more revenue from short-form video platforms such as TikTok that command huge audiences but provide relatively few royalties. “Premium-quality artistry drives the appeal of these services, with music being central to approximately 70% of videos created on them,” said Stringer. “These companies play a larger and larger role in music discovery and engagement amongst young listeners. More and more, these are primary consumption sources, and they need to be valued accordingly.”
Stringer, who does not comment during the parent company’s quarterly earnings calls, spoke and answered questions for 40 minutes about Sony Music artists, chart successes, growth opportunities and efforts in emerging markets. After highlighting Sony Music’s efforts in Latin America, India and China, he focused on the newest — and most vexing — technology on the music industry’s horizon. Artificial intelligence, he said, “represents a generational inflection point for music” and Sony Music will take “an active role” in creating a “sustainable business model” that respects the company’s rights.
But Stringer was clear that Sony Music is taking a hard line in the battle to shape AI in music. “We won’t tolerate the illicit training of AI models by reckless and unlicensed misuse of this art,” he warned. “We believe strongly that permission is the only way AI models can be trained with our content.” The company followed the protocols of the EU AI Act by sending over 700 letters to AI developers to opt its copyrights out of training, Stringer said, and has issued “over 20,000 takedowns of AI generated soundalikes over the past year,” while working with legislators around the world “to shape policy and rights” on AI issues.
“With the right frameworks in place, innovation will thrive, technology, music will benefit and consumers will enjoy your experiences,” Stringer said. “We have prospered from disruptive market changes before so we are confident we can navigate this chapter successfully.”
When Sabrina Carpenter signed with the Universal Music Publishing Group (UMPG) in October 2023, she was coming off the critical and commercial success of her 2022 Island Records debut Emails I Can’t Send, a project that established her as a formidable pop hitmaker with a distinct voice and a captivating appeal. But since that album’s release, her career has launched into the stratosphere, with a string of singles — “Nonsense” off the original Emails; “Feather,” which was released on the deluxe of Emails in August 2023; and, most recently, April 2024’s “Espresso” — that have each reached higher on the charts than the last, building her into a mainstream dynamo with song-of-the-summer hitmaking potential.
It’s been “Espresso,” however, that has truly captured the zeitgeist. The song zoomed onto the Billboard Hot 100 with a No. 7 debut, eventually reaching No. 4, and has done even better globally, reaching No. 1 on the Billboard Global Excl. U.S. chart — where it spends its second week this week, establishing it as a bona fide international hit. That marks the highest chart placement of Carpenter’s career, powered by the work of her label, Island, and her publishing company, UMPG — and it helped earn David Gray, UMPG’s co-head of U.S. A&R and head of its global creative group, the title of Billboard’s Executive of the Week.
Here, Gray discusses the work UMPG has done with Carpenter in the six months since bringing her into the pubco, what sets her apart as a songwriter and the company’s global outlook. “She has always had a vision for herself as an artist and songwriter,” Gray says. “It’s rewarding to see her succeed at a global level and get all of the credit she deserves.”
This week, Sabrina Carpenter’s “Espresso” spends its second week at No. 1 on the Billboard Global Excl. U.S. chart and its fifth week in the top 10 of the Hot 100. What key decision did you make to help make that happen?
Overall at UMPG, we work to support our songwriters’ ideas and decisions in any way we can, whether putting together strategic writing sessions or working to secure great synch opportunities globally.
Sabrina signed with UMPG last October. What were your first conversations like with her about her music and where she wanted to go?
Sabrina talked about how the Emails I Can’t Send album was a step up from where she was before and she was ready to take it up to the next level from there. She has always had a vision for herself as an artist and songwriter. It’s rewarding to see her succeed at a global level and get all of the credit she deserves.
What sets Sabrina apart from other pop stars as a songwriter, and how have you helped to emphasize that?
Sabrina has such a unique and brilliant songwriting voice, both lyrically and melodically. All the years of doing sessions, working hard, taking songwriting very seriously and perfecting her craft has made her not only the artist in the writing session… but she is also an A-list-level songwriter talent-wise.
For the past two years you’ve headed up UMPG’s global creative group. How has that changed how you work with songwriters, and in what ways does it help your global reach?
At UMPG, we have always recognized that there are amazing writing opportunities for songwriters outside of their own territories. The number and quality of these writing opportunities has accelerated in the last few years. The communication between territories that the Global Creative Group provides is essential to making sure our writers get the best of these opportunities.
How are you preparing to deal with AI in the publishing world?
It’s still nascent in the broader creative community, but we know AI offers opportunities and risks. We embrace AI, just as we have other technology innovations in the past, but only AI technology that is ethical and artist-centric — in other words, only if it supports songwriters and protects their rights.
Voice-Swap, an ethically-trained AI voice company, and BMAT Music Innovators, a company that indexes music usage and ownership data using machine learning, have partnered to launch a new technical certification for AI voice and music models. It is designed to verify that the audio content used to train voice models does not infringe on any […]
NASHVILLE — Ahead of the 2024 Music Biz conference, Music Business Association president Portia Sabin predicted that artificial intelligence would be the most hotly discussed topic.
“AI is the big one that everyone’s talking about,” she told Billboard.
That prediction proved true during the conference (held in Nashville May 13-16), as dozens of speakers across the spectrum of music, tech, legal and more discussed AI’s uncertain future in the space and its current impact on the industry.
One such panel was “How AI and Tech Are Shaping the Business of Music” on Monday (May 13). Moderated by Elizabeth Brooks, managing partner at Better Angels Venture, the panelists — Jeremy Gruber, head of artist marketing and digital strategy at Friends At Work; Jeff Rosenfeld, senior vp of product and technology at MAX; MADKAT founder Maddy Sundquist; and singer-songwriter Stephen Day — discussed the emergence of AI in music, some of the concerns surrounding its potential impact on artist creativity, and how artists can maintain an authentic connection to their fans.
As the sole artist on the panel, Day kicked off the AI portion of the discussion. Despite the recent uptick in the use of generative AI in popular music — most recently with Drake’s “Taylor Made Freestyle,” the diss track in which he used AI to recreate the voices of Snoop Dogg and the late Tupac Shakur, since taken down after the Shakur estate threatened legal action — Day said he’s not concerned about the technology’s emergence. “Overall, I’m not really scared about it because technology has always advanced,” he said. “The human with the heart and the soul is what makes it important.”
Rosenfeld agreed, adding that “technology continually upends the business of music,” pointing to social media as an example of something that changed digital marketing strategies for artists and labels. One group that could be at risk though, he said, are artists that people don’t have a direct connection with, like film/TV composers. “It’s the personal connection [that fans are after]. It is the person and their story behind the music that people relate to,” he said. “And that’s why it’s important to have a relationship with your fans.”
Rosenfeld isn’t the first exec to note that risk for artists who make instrumental music. In a 2023 Billboard story, Oleg Stavitsky, co-founder/CEO of AI-driven functional sound company Endel, pointed to “functional music” (that is, a type of audio “not designed for conscious listening”) as an area of focus for their firm. While the company isn’t in the business of making hits, it’s focused on making music that promotes sleep or relaxation (lo-fi music, ambient electronics, etc.) with help from AI tools. Another company, LifeScore, which uses AI to “create unique, real-time soundtracks for every journey,” recruited James Blake to create an AI ambient soundtrack titled Wind Down.
While that’s a threat to that corner of the market, the panelists were cautiously optimistic overall about AI’s future impact.
“AI is not our overlord today,” Brooks said. Added Rosenfeld, “It’s enabled small businesses to expand… It’s destabilizing, but at the same time empowering.”
At a separate panel on Wednesday (May 15) titled “How AI Is Changing the Way We Market, Promote & Sell Music,” the speakers also had a positive outlook on AI in the industry. Moderated by co-founder and CEO of 24/7 Artists, Yudu Gray, Jr., the panel featured chief product officer of SymphonyOS, Chuka Chase; head of communications & creator insights at BandLab Technologies, Dani Deahl; and Visionary Rising founder LaTecia Johnson.
Chase said that his company has used AI to streamline the process of finding and growing an audience for artists. One example: building a setlist for an emerging artist’s first tour by emailing fans and running polls about what the artist should perform in each city. “We went into the CRM and blasted emails to put out polls, a microsite asking what songs [that artist] should perform. After a couple hours we got around 20,000 responses,” he said, adding that he could then plug that data into GPT and make a setlist based on the most-requested songs.
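Chase didn’t detail the exact pipeline, but the aggregation step he describes — tallying poll responses per city and ranking the most-requested songs — is simple to sketch. The data format and song titles below are hypothetical, standing in for whatever a real CRM export would look like:

```python
from collections import Counter

def build_setlist(responses, city, length=10):
    """Rank the most-requested songs for one tour stop.

    `responses` is a list of (city, song) pairs, e.g. parsed from a
    poll or CRM export (the format here is an assumption).
    """
    votes = Counter(song for c, song in responses if c == city)
    # most_common() returns (song, count) pairs sorted by vote count
    return [song for song, _ in votes.most_common(length)]

# Hypothetical poll data
responses = [
    ("Austin", "Midnight Run"), ("Austin", "Midnight Run"),
    ("Austin", "Glass House"), ("Denver", "Glass House"),
]
print(build_setlist(responses, "Austin", length=2))
# → ['Midnight Run', 'Glass House']
```

In practice the 20,000 raw free-text responses would first need cleaning and fuzzy matching against the artist's catalog, which is presumably where a model like GPT earns its keep.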
For Deahl, who’s also a DJ and music producer, AI has helped with delegating various administrative tasks. “One of the biggest hurdles that artists now have to overcome is they don’t have to just worry about the creative components… They have to worry about all these different facets of their business.” She argues that any tool that gives her the ability to “cut out the BS” and give her the time to focus on the creative process is the best way to help her amplify her work. “Not every artist is built to be an entrepreneur,” she said.
Several companies are beginning to launch similar “AI assistants” for these kinds of admin roles. Last month, for example, Venice Music launched a new tool called Co-Manager “to educate artists on the business and marketing of music, so artists can spend more time focused on their creative vision,” Suzy Ryoo, co-founder and president of the company, said in a statement at the time. The idea is to, as Deahl said, give artists more time to be artists.
To that end, as AI tools become more prominent, the humans on an artist’s team are now more crucial than ever. While AI tools may shrink the size of an artist’s team, Deahl doesn’t envision a world in which human roles are fully replaced. “I don’t worry about replacement when it comes to the people I engage with,” she said. “It would be a really lonely road for me as an artist if the only things that I relied on were AI chatbots or tools that tell me what my strategy should be. I need human feedback.”
Sony Music warned tech companies not to mine its recordings, compositions, lyrics and more “for any purposes, including in relation to training, developing, or commercializing any [artificial intelligence] system,” in a declaration published on the company’s website on Thursday (May 16).
In addition, according to a letter obtained by Billboard, Sony Music is in the process of reaching out to hundreds of companies developing generative AI tech, as well as streaming services, to drive home this message directly.
The pointed letter notes that “unauthorized use of SMG Content in the training, development or commercialization of AI systems deprives SMG Companies and SMG Talent of control over and appropriate compensation for the uses of SMG Content, conflicts with the normal exploitation of those works, unreasonably prejudices our legitimate interests, and infringes our intellectual property and other rights.”
GenAI models require “training” — “a computational process of deconstructing existing works for the purpose of modeling mathematically how [they] work,” as Google explained in October comments to the U.S. Copyright Office. “By taking existing works apart, the algorithm develops a capacity to infer how new ones should be put together.” Through inference, these models eventually can generate credible-sounding hip-hop beats, for example.
Whether a company needs permission before undertaking the training process on copyrighted works is already the subject of a fierce debate, leading to lawsuits in several industries. In October, Universal Music Group (UMG) was among the music companies that sued AI startup Anthropic, alleging that “in the process of building and operating AI models, [the company] unlawfully copies and disseminates vast amounts of copyrighted works.”
Although these cases will likely set precedent for AI training practices in the U.S., the courts typically move at a glacial pace. In the meantime, some technology companies seem set on training their genAI tools on large troves of recordings without permission.
“Based on recent Copyright Office filings it is clear that the technology industry and speculative financial investors would like governments to believe in a very distorted view of copyright,” Dennis Kooker, Sony Music’s president of global digital business, said during the Artificial Intelligence Insight Forum in Washington, D.C. in November. “One in which music is considered fair use for training purposes and in which certain companies are permitted to appropriate the entire value produced by the creative sector without permission, and to build huge businesses based on it without paying anything to the creators concerned.”
While Kooker was adamant during his testimony that training for genAI music tools “cannot be without consent, credit and compensation to the artists and rightsholders,” he also pointed out that Sony has “roughly 200 active conversations taking place with start-ups and established players about building new products and developing new tools.”
“These discussions range from tools for creative or marketing assistance, to tools that potentially give us the ability to better protect artist content or find it when used in an unauthorized fashion, to brand new products that have never been launched before,” he continued.
Sony’s letter to genAI companies this week ended on a similar note: “We invite you to engage with us and the music industry stakeholders we represent to explore how your AI Act copyright policy may be developed in a manner that ensures our and SMG Talent’s rights are respected.”
For the last two years, I’ve poured my angst, joy, wonder and grief into a musical project called Current Dissonance.
I read the news voraciously, and every few days, a story resonates with particular thunder. I sit with that story in mind, as inspiration and intention, and then record a piece of solo piano music, composed on the spot, in reaction. Most often, Current Dissonance subscribers receive the new track within minutes of its completion.
I love engaging with this project. It’s become a cathartic practice and wordless diary, connective tissue when so much around us seems to be fracturing, something full of guts and blood and soul that feels deeply personal and unapologetically human.
Given all that, I find it both thrilling and jarring that AI music creation has advanced to a point where well-crafted algorithms could largely take my place as the brain, heart and fingers behind this project.
At its core, the fusion of AI and music creation isn’t new, and its evolution from tweaky curiosity to full-on cultural juggernaut has been fascinating to watch. My first exposure came via Digital Audio Workstations (DAWs) — the complex software suites used to produce nearly all new music. Years ago, I experimented with an early AI feature that allowed virtual drummers to bang out rudimentary grooves tailored to my songs-in-progress; another utility let me stretch and distort audio samples in subtle or grotesque ways. Later, I wrote coverage of a startup that used machine learning to auto-generate soundtracks for video.
Some of those legacy AI utilities felt promising but imperfect, others inelegant to the point of unusability. But they all showed the potential of what was to come. And it’s not hard to see that what was coming has now arrived — with the force of a freight train.
Welcome To The New A(I)ge
Examples of AI’s growth spurt permeate the music world. For cringe-worthy fun, check out There I Ruined It, where AI Simon & Garfunkel sing lyrics from “Baby Got Back” and “My Humps” to the melody of “Sound of Silence.” Then visit Suno, where single-sentence prompts yield remarkably realistic songs — fully-produced, with customized lyrics — in electronica, folk, metal and beyond. Open up Logic Pro and hear just how big and vivid its AI mastering utility can make a track sound in seconds. These developments are just the overture, and there’s no technical reason why a vast array of musical projects — including my own — couldn’t be AI-ified in the movements to come.
For example, I’ve created 154 short piano pieces for Current Dissonance as of this writing. Hours more of my piano work are publicly accessible. An AI model could be trained on those recordings to look for patterns in the notes I play, the chord voicings I choose, the ways I modulate volume and manipulate rhythms — all the subtle choices that make me sound like me, as opposed to anyone else sitting at a piano.
The algorithm would also need to learn the relationship between each Current Dissonance movement and the news article it reinterprets, building a map of correlations between facets of the written story and recorded music. Do Locrian-mode motifs in 7/8 permeate my playing when I’m reflecting on South Asian politics — and are C#s twice as likely to appear when I reimagine headlines that are less than four words long? I have no idea, but a well-trained AI model would parse those potential patterns and more.
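A real system trained on those recordings would use a far more capable architecture, but the basic idea of learning statistical patterns in a note sequence and sampling new material from them can be illustrated with a toy first-order Markov model (the note data here is invented, not drawn from any actual recording):

```python
from collections import defaultdict, Counter
import random

def train_transitions(note_sequences):
    """Count how often each note follows another across recordings."""
    transitions = defaultdict(Counter)
    for seq in note_sequences:
        for a, b in zip(seq, seq[1:]):
            transitions[a][b] += 1
    return transitions

def generate(transitions, start, length=8, seed=0):
    """Sample a new phrase by walking the learned transition counts."""
    rng = random.Random(seed)
    phrase = [start]
    for _ in range(length - 1):
        counts = transitions.get(phrase[-1])
        if not counts:  # dead end: no note ever followed this one
            break
        notes, weights = zip(*counts.items())
        phrase.append(rng.choices(notes, weights=weights)[0])
    return phrase

# Hypothetical note data standing in for the real piano recordings
pieces = [["C", "E", "G", "E", "C"], ["C", "E", "G", "B", "G"]]
model = train_transitions(pieces)
print(generate(model, "C"))
```

Mapping written-story features (topic, headline length) to musical features, as described above, would add a second, conditional layer on top of this kind of sequence model.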
In the end, my hypothetical AI Current Dissonance would function like Suno does for popular music formats. To hear a Michael Gallant-style piano reaction to anything, type in your prompt and see what erupts.
While this may sound like a daydream, the key technical bedrock exists right now, or will exist soon. Following a similar development pathway, I doubt it’ll be long before we can also hear how Tchaikovsky might have reacted symphonically to war in Ukraine, or how McCoy Tyner could have soloed over “Vampire,” “Believer,” or any other tune written after his death. Elvis Presley reimagining Elvis Costello, Billie Holiday reinterpreting Billie Eilish, John Philip Sousa composing marches to honor nations that didn’t exist when he did — the possibilities are stunning.
But where does all of this innovation leave today’s music professionals?
Old Theme, New Variations
Recent conversations with fellow music-makers have yielded gallows humor, dark jokes about obsolescence at the hands of the robots — but also a sense of resilience, the feeling that we’ve heard this tune before.
Take, for example, the advancement of synthesizer technology, which has certainly constricted market demand for musicians who make their living playing in recording sessions. And the ubiquity of affordable, powerful DAWs like Pro Tools, Ableton Live and GarageBand has snuffed out a generation of commercial studios and their engineers’ careers. Those losses are real and devastating, but they’re only part of the story.
Inventing, programming and performing with synthesizers has become a thriving musical specialty of its own, creating new professional opportunities amid the ashes of the old. The same can be said for the brilliant minds who make every new bit of music software even more amazing. And democratized music production via GarageBand and its ilk has made possible the global ascent of DIY artists who could never have afforded to work in traditional studios.
As the duality of loss and regrowth takes hold in the AI era, everyone involved in music must amplify the latter, while keeping the former as muted as possible. There are key steps that communities and countries alike can take to ensure that AI music technology boosts existing creators and inspires new ones — that it enhances human creativity more than it cuts us down.
Shedding for the Future
The biggest error music-makers can commit is pretending that nothing will change. When it comes to AI, willful ignorance will lead to forced irrelevance. Let’s avoid that future.
Instead, I encourage all music-makers to learn as much about AI music technology as possible. These tools are not secret weapons, siloed away for the rich and privileged; with an internet connection and a few hours, any music-maker can gain at least a high-level look at what’s going on. It’s incumbent on all of us to learn the landscape, learn the tools and see how they can make our human music-making better.
Music-makers must also double down on human connections. For artists with followings large or small, this means rededicating ourselves to building meaningful relationships with audiences, strengthening the human connection that AI can only approximate. Taking time to greet listeners at each performance, making space to bond with superfans — just as in-person concerts will grow in meaning as fiction and reality become increasingly indistinguishable in the digital world, so will the importance of face-to-face conversations, handshakes and high-fives, hugs between artists and those who see beauty in their music.
For music-makers who spend their time in studio settings, reinforcing connections with clients and collaborators will also be key. While I currently rely on AI-fueled music tools in some contexts, I cherish every opportunity to team up with fellow humans, because I’m blessed to work with great people who elevate and inspire me. That’s another vital connection that AI cannot now — or hopefully ever — replace.
It Takes a Movement
Music-makers, those who support them in commerce and industry, and those who weave music into their lives as listeners — all of us must help build a movement that cherishes human creativity lifted through technology.
There’s already hard evidence that protecting artists’ digital integrity is an all-too-rare consensus issue within American politics; check out Tennessee’s bipartisan ELVIS Act for more. Music-makers in any community can push their local and national leaders to ride Tennessee’s momentum and reproduce its successes against AI abuse. As a voting member of the Recording Academy, I’m proud of the organization’s pro-human activism efforts when it comes to federal copyright law and other vital issues. Every music-related entity should make noise in favor of similar protections.
Granted, even the smartest laws will only go so far. AI music technology is so accessible that trolls and bad actors will likely be able to manipulate musicians’ voices, privately and anonymously, without suffering real consequences — a dynamic unlikely to change anytime soon. But the more our culture brands such exploitative recordings as tasteless and taboo, the better. We cultivate respect for human creators when we marginalize the consumption of non-consensual, AI-smelted musical plastic.
Consent is one key; control is another. While industry executives, music-makers of all shapes and flavors, influencers and lawmakers must collectively insist that musicians remain masters of their own voices, I recommend we go further by empowering artists themselves to take the lead.
It would be brilliant, and fair, for Madonna or Janelle Monáe, Juanes or Kendrick Lamar, to release interactive AI albums that they, the artists, control. Such properties could allow fans to create custom AI tracks from raw material exclusively recorded for that purpose. Under no circumstances should AI assets be leveraged for any use without the explicit permission — and compensation — of the humans responsible for the music on which those algorithms were trained.
…And I Feel Fine
In the face of AI’s explosion, we must remember to stay curious, hungry and optimistic. Investors, inventors and tech companies must look beyond novelty song creation as the technology’s highest musical goal; I can’t imagine how far AI will go when applied to creating new instruments, for example. Much of the music I make is improvisational, formed in my brain milliseconds before it’s realized by my fingers. How amazing would it be to jam with live band members — as well as an AI algorithm trained to create instant orchestrations, in real time as I play, using a never-before-heard chimera of Les Paul overdrive, volcanic glass vibraphone and a grizzly bear roaring?
AI presents massive challenges to human creators of any sort, but if we proceed with thoughtfulness and respect, new innovations will lift music-making communities everywhere. I for one will be thrilled to learn who the first Beethoven, Beyoncé and Robert Johnson of the AI era will be, and to hear the masterpieces they create.
Michael Gallant is a musician, composer, producer, and writer living in New York City.
A bipartisan group of four senators led by Majority Leader Chuck Schumer is recommending that Congress spend at least $32 billion over the next three years to develop artificial intelligence and place safeguards around it, writing in a new report released Wednesday that the U.S. needs to “harness the opportunities and address the risks” of the quickly developing technology.
The group of two Democrats and two Republicans said in an interview Tuesday that while they sometimes disagreed on the best paths forward, it was imperative to find consensus with the technology taking off and other countries like China investing heavily in its development. They settled on a raft of broad policy recommendations that were included in their 33-page report.
While any legislation related to AI will be difficult to pass, especially in an election year and in a divided Congress, the senators said that regulation and incentives for innovation are urgently needed.
“It’s complicated, it’s difficult, but we can’t afford to put our head in the sand,” said Schumer, D-N.Y., who convened the group last year after AI chatbot ChatGPT entered the marketplace and showed that it could in many ways mimic human behavior.
The group recommends in the report that Congress draft “emergency” spending legislation to boost U.S. investments in artificial intelligence, including new research and development and new testing standards to try to understand the potential harms of the technology. The group also recommended new requirements for transparency as artificial intelligence products are rolled out and that studies be conducted into the potential impact of AI on jobs and the U.S. workforce.
Republican Sen. Mike Rounds, a member of the group, said the money would be well spent not only to compete with other countries that are racing into the AI space but also to improve Americans’ quality of life — supporting technology that could help cure some cancers or chronic illnesses, he said, or improve weapons systems in ways that could help the country avoid a war.
“This is a time in which the dollars we put into this particular investment will pay dividends for the taxpayers of this country long term,” he said.
The group came together a year ago after Schumer made the issue a priority — an unusual posture for a majority leader — and brought in Democratic Sen. Martin Heinrich of New Mexico, Republican Sen. Todd Young of Indiana and Rounds of South Dakota.
As the four senators began meeting with tech executives and experts, Schumer said in a speech over the summer that the rapid growth of artificial intelligence tools was a “moment of revolution” and that the government must act quickly to regulate companies that are developing it.
Young said the development of ChatGPT, along with other similar models, made them realize that “we’re going to have to figure out collectively as an institution” how to deal with the technology.
“In the same breath that people marveled at the possibilities of just that one generative AI platform, they began to hypothesize about future risks that might be associated with future developments of artificial intelligence,” Young said.
While passing legislation will be tough, the group’s recommendations lay out the first comprehensive road map on an issue that is complex and has little precedent for consideration in Congress. The group spent almost a year compiling the list of policy suggestions after talking privately and publicly to a range of technology companies and other stakeholders, including in eight forums to which the entire Senate was invited.
The first forum in September included Elon Musk, CEO of Tesla and owner of X, Meta’s Mark Zuckerberg, former Microsoft CEO Bill Gates and Google CEO Sundar Pichai.
Schumer said after the private meeting that he had asked everyone in the room — including almost two dozen tech executives, advocates and skeptics — whether government should have a role in the oversight of artificial intelligence, and “every single person raised their hand.”
The four senators are pitching their recommendations to Senate committees, which are then tasked with reviewing them and trying to figure out what is possible. The Senate Rules Committee is already moving forward with legislation, voting on Wednesday on three bills that would ban deceptive AI content used to influence federal elections, require AI disclaimers on political ads and create voluntary guidelines for state election offices that oversee candidates.
Schumer, who controls the Senate’s schedule, said those election bills were among the chamber’s “highest priorities” this year. He also said he planned to sit down with House Speaker Mike Johnson, who has expressed interest in looking at AI policy but has not said how he would do that.
Some experts warn that the U.S. is behind many other countries on the issue, including the European Union, which took the lead in March when it gave final approval to a sweeping new law governing artificial intelligence across the 27-nation bloc. Europe’s AI Act sets tighter rules for the AI products and services deemed to pose the highest risks, such as in medicine, critical infrastructure or policing. But it also includes provisions regulating a new class of generative AI systems like ChatGPT that have rapidly advanced in recent years.
“It’s time for Congress to act,” said Alexandra Reeve Givens, CEO of the Center for Democracy & Technology. “It’s not enough to focus on investment and innovation. We need guardrails to ensure the responsible development of AI.”
The senators emphasized balance between those two issues, and also the urgency of action.
“We have the lead at this moment in time on this issue, and it will define the relationship between the United States and our allies and other competing powers in the world for a long time to come,” Heinrich said.