The three major music companies are weighing a lawsuit against AI startups Suno and Udio for allegedly training on copyrighted sound recordings, according to multiple sources.
The potential lawsuit, which would include Universal Music Group, Warner Music Group and Sony Music, would target a pair of companies that have quickly become two of the most important players in the emerging field of generative AI music. While many of their competitors focus on generating only music, lyrics or vocals, Suno and Udio allow users to generate all three at the click of a button. Two sources said the lawsuit could come as soon as next week. Reps for the three majors, as well as Suno and Udio, did not respond to requests for comment.
Music companies, including UMG, have already filed a lawsuit against Anthropic, another major AI firm, over the use of copyrighted materials to train models. But that case dealt only with lyrics, which in many ways are legally similar to other written works. The new suit would deal with music and sound itself.
Just a few months after its launch, Udio has already produced what could be considered an AI-generated hit song with “BBL Drizzy,” a parody track created by comedian King Willonius and popularized via a remix by superproducer Metro Boomin. Later, the song reached new heights when it was sampled in Sexyy Red and Drake’s song “U My Everything,” becoming the first major example of sampling an AI-generated song.
Suno has also achieved early success since its launch in December 2023. In May, the company announced via a blog post that it had raised a total of $125 million in funding from a group of notable investors, including Lightspeed Venture Partners and Nat Friedman and Daniel Gross.
Both companies, however, have drawn criticism from many members of the music business who believe the models train on vast swathes of copyrighted material, including hit songs, without consent, compensation or credit to rights holders. Representatives for Suno and Udio have previously declined to say whether they train on copyrighted works, with Udio’s co-founders telling Billboard they simply train on “good music.”
In a recent Rolling Stone story about Suno, investor Antonio Rodriguez acknowledged that Suno does not have licenses for whatever music it has trained on, but said that did not concern him, adding that the lack of licenses is “the risk we had to underwrite when we invested in the company, because we’re the fat wallet that will get sued right behind these guys… Honestly, if we had deals with labels when this company got started, I probably wouldn’t have invested in it. I think that they needed to make this product without the constraints.”
In a series of articles for Music Business Worldwide, Ed Newton-Rex, founder of the AI safety non-profit Fairly Trained, found that he was able to generate music from Suno and Udio that “bears a striking resemblance to copyrighted music. This is true across melody, chords, style and lyrics,” he wrote. Both companies, however, bar users from prompting the models to copy artists’ styles with prompts like “a rock song in the style of Radiohead” or from using specific artists’ voices.
The case, if it is filed, would hinge on whether the use of unlicensed materials to train AI models amounts to copyright infringement — something of an existential question for the booming sector, since depriving AI models of new inputs could limit their abilities. Content owners in many sectors, including book authors, comedians and visual artists, have all filed similar lawsuits over training.
Many AI companies argue that such training is protected by copyright’s fair use doctrine — an important rule that allows people to reuse protected works without breaking the law. Though fair use has historically allowed for things like news reporting and parody, AI firms say it applies equally to the “intermediate” use of millions of works to build a machine that spits out entirely new creations. That argument will likely be the central question in any lawsuit over AI training.
Some AI companies have taken what is often called a more “ethical” approach to AI training by working directly with companies and rights holders to license their copyrights or form official partnerships instead.
So far, the majors have embraced partnering with AI companies in this way. Already, UMG and WMG have worked with YouTube for its AI voice experiment DreamTrack; Sony has partnered with Vermillio on a remix project for The Orb and David Gilmour; WMG has worked with Edith Piaf’s estate to recreate her voice using AI for an upcoming biopic; UMG launched an AI music incubator with YouTube Music; and most recently, UMG has teamed up with SoundLabs to let their artists create their own AI voice models for personal use in the studio.
Futureverse, an AI music company co-founded by music technology veteran Shara Senderoff, has announced the alpha launch of Jen, its text-to-music AI model. Available for anyone to use on the company’s website, Jen is pitched as a model creators can use safely, given that it was trained on 40 different fully licensed catalogs containing about 150 million works in total.
The company’s co-founders, Senderoff and Aaron McDonald, first teased Jen’s launch by releasing a research paper and conducting an interview with Billboard in August 2023. In the interview, Senderoff explained that “Jen is spelled J-E-N because she’s designed to be your friend who goes into the studio with you. She’s a tool.”
Jen’s capabilities at its alpha launch include generating 10-45 second song snippets from text prompts. To lengthen a song to a full 3:30 duration, users can employ its “continuation” feature to re-prompt and add additional segments. With a focus on “its commitment to transparency, compensation and copyright identification,” as its press release states, Jen has made much of its inner workings public via its research papers, including that the model uses “latent diffusion,” the same process used by Stable Diffusion, DALL-E 2 and Imagen to create high-quality images. (It is unclear which other music AI models use latent diffusion, given that many do not share this information publicly.)
Additionally, when works are created with Jen, users receive a JENUINE indicator, verifying that the song was made with Jen at a specific time. Specifically, the indicator is a cryptographic hash that is recorded on The Root Network blockchain.
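Futureverse has not published the details of how JENUINE works beyond describing it as a hash recorded on The Root Network, but the general pattern of hash-based provenance is easy to illustrate. The minimal Python sketch below is a hypothetical stand-in, not Jen’s actual code: it fingerprints an audio file with SHA-256 and pairs the digest with a timestamp, the kind of record a real system would then write to a blockchain rather than keep locally. The function names and the example filename are assumptions for illustration only.

```python
# Hypothetical illustration of hash-based provenance (not Jen's actual implementation).
# It fingerprints an audio file with SHA-256 and attaches a UTC timestamp; a real
# system would anchor this record on a blockchain instead of printing it locally.
import hashlib
import json
import time
from pathlib import Path


def make_provenance_record(audio_path: str) -> dict:
    """Return a provenance record: file hash plus creation timestamp."""
    digest = hashlib.sha256(Path(audio_path).read_bytes()).hexdigest()
    return {
        "file": Path(audio_path).name,
        "sha256": digest,
        "created_utc": int(time.time()),  # Unix timestamp of generation
    }


def verify(audio_path: str, record: dict) -> bool:
    """Re-hash the file and confirm it matches the recorded fingerprint."""
    current = hashlib.sha256(Path(audio_path).read_bytes()).hexdigest()
    return current == record["sha256"]


if __name__ == "__main__":
    record = make_provenance_record("song.wav")     # assumes song.wav exists locally
    print(json.dumps(record, indent=2))
    print("verified:", verify("song.wav", record))  # True if the file is unchanged
```

Because the hash changes if even one byte of the audio is altered, a timestamped record like this lets anyone later check that a given file is the same one that was registered at generation time.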
In an effort to work more closely with the music business, Futureverse brought on APG founder/CEO Mike Caren as a founding partner in fall 2023. While its mid-2024 release date makes it a late entrant in the music AI space, the company attributes this delay to making sure its 40 licenses were secured.
For now, Futureverse has declined to comment on which songs are included in Jen’s overall training catalog, but a representative for the company says the 40 catalogs include a number of production libraries. Futureverse says it is also in talks with all the major music companies and expects to have more licenses secured for Jen’s beta launch, expected in September 2024. Some licensing partners could be announced as soon as 4-6 weeks after the alpha launch.
In September, Futureverse has more capabilities planned, including longer initial song results, inpainting (the process of filling in missing sections or gaps in a musical piece) and a capability the company calls its “StyleFilter,” allowing users to upload an audio snippet of an instrument or track and then change the genre or timbre of it at the click of a button.
Also in September, Futureverse plans to launch a beat marketplace called R3CORD to go along with Jen. This will let users upload whatever they produce with the model to the marketplace and sell those works to others.
So far, the U.S. Copyright Office has advised that fully AI-generated creations are not protected by copyright. Instead, they are considered “public domain” works and are not eligible to earn royalties the way copyrighted works do, though any human additions to an AI-assisted work can be copyright protected. (This guidance has already been applied in the music business in the case of Drake and Sexyy Red’s “U My Everything,” which sampled the fully AI-generated sound recording “BBL Drizzy.”)
“We have reached a defining moment for the future of the music industry. To ensure artistry maintains the value it deserves, we must commit to honor the creativity and copyrights of the past, while embracing the tools that will shape the next generation of music creation,” says Senderoff. “Jen empowers creators with tools that enhance their creative process. Jen is a collaborator; a friend in the studio that works with you to ideate and iterate. As we bring Jen to market, we are partnering with music rights holders and aligning with the industry at large to deliver what we believe is the most ethical approach to generative AI.”
“We’re incredibly proud of the work that’s gone into building Jen, from our research and technology to a strategy that we continue to develop with artists’ rights top of mind,” says Caren. “We welcome an open dialogue for those who’ve yet to meet Jen. There’s a seat at the table for every rightsholder to participate in building this next chapter of the music industry.”
Universal Music Group has announced a new partnership with SoundLabs, a “responsible” AI music tools company, to provide AI music editing tools to UMG talent. This includes a real-time personalized AI voice clone plug-in, called “MicDrop,” due to launch later this summer.
The SoundLabs AI-powered music editing tools (AU, VST3, AAX) plug into all major digital audio workstations (DAWs), including Logic, Pro Tools, Ableton and more, letting musicians clean up their vocals, make changes and even shape-shift their voices at the click of a button. With MicDrop, UMG artists can create their own AI voice models, but these custom models will be exclusive to their own creative use and not available to the general public.
The news falls squarely in line with the company’s “responsible” AI strategy, laid out by UMG CEO and chairman Lucian Grainge at the beginning of 2024. In a January memo to staff obtained by Billboard, Grainge wrote that even though some experts viewed AI as “a looming threat,” UMG’s view was that AI would be “presenting opportunities” for the company. “Just as we did with streaming, we went out to turn those opportunities into reality.”
He went on to explain his two-prong approach to embracing what he called UMG’s “responsible AI initiative.” First, he would lobby for “guardrails,” or public policies to protect artists’ name, image, voice, and likeness from wrongful impersonation and other “basic rules.”
Second, Grainge set out “to forge groundbreaking private-sector partnerships with AI technology companies,” which now includes its deal with SoundLabs. “In the past, new and often disruptive technology was simply released into the world, leaving the music community to develop the model by which artists would be fairly compensated and their rights protected,” Grainge continued. “In a sharp break with that past, we formed a historic relationship with our longtime partner, YouTube, that gives artists a seat at the table before any product goes to market, including helping to shape AI products’ development and a path to monetization.”
SoundLabs was co-founded by Grammy-nominated producer, composer, software developer and electronic artist BT. After a 25-year career working with David Bowie, Madonna, Sting, Death Cab for Cutie, Peter Gabriel and Seal, he turned to software development to create new music tools to help producers innovate. Over the years, his software products — including patented audio plugins like Stutter Edit, BreakTweaker (iZotope), Polaris and Phobos (Spitfire Audio) — have generated $70 million in gross sales.
The company’s other co-founders — Joshua Dickinson, Dr. Michael Hetrick and Lacy Transeau — are all aligned with BT on an “artist-first” approach to making AI music tools. As a press release from UMG about the partnership states: “SoundLabs was founded with a foundational respect for intellectual property rights and is focused on helping artists retain creative control over their data and models.”
“It’s a tremendous honor to be working with the forward-thinking and creatively aligned Universal Music Group. We believe the future of music creation is decidedly human. Artificial intelligence, when used ethically and trained consensually, has the promethean ability to unlock unimaginable new creative insights, diminish friction in the creative process and democratize creativity for artists, fans, and creators of all stripes. We are designing tools not to replace human artists, but to amplify human creativity,” says BT.
Chris Horton, svp of strategic technology at Universal Music adds: “UMG strives to keep artists at the center of our AI strategy, so that technology is used in service of artistry, rather than the other way around. We are thrilled to be working with SoundLabs and BT, who has a deep and personal understanding of both the technical and ethical issues related to AI. Through direct experience as a singer and in partnership with many vocal collaborators, BT understands how performers view and value their voices, and SoundLabs will allow UMG artists to push creative boundaries using voice-to-voice AI to sing in languages they don’t speak, perform duets with their younger selves, restore imperfect vocal recordings, and more.”
In the mid-1990s, Jason Paige, then a struggling singer trying to break through with his rock band, could make a solid living writing Mountain Dew, Taco Bell and Pepto Bismol earworms for the jingle houses that dominated the music-in-advertising industry for decades. But during an interview a few weeks ago, Paige — who ultimately became most famous as the voice of the Pokémon theme song “Gotta Catch ‘Em All” — fires up an artificial-intelligence program. Within minutes, he emails eight studio-quality, terrifyingly catchy punk, hip-hop, EDM and klezmer MP3s centered on the reporter’s name, the word Billboard and the phrase “the jingle industry and how it’s changed so much over the years.”
The point is self-evident. “Yeah,” Paige says, about the industry that once sustained him. “It is dark.”
Today, the jingle business has evolved from an assembly line of composers and performers competing to make the next “plop plop fizz fizz” into a more multifaceted relationship between artists and companies, involving brand relationships (like Taylor Swift’s long-standing Target deal); Super Bowl synchs worth hundreds of thousands of dollars; production-house music that lets brands pick from hundreds of thousands of pre-recorded tracks; and “sonic branding,” in which the Intel bong or Netflix’s tudum are used in a variety of marketing contexts. Performers and songwriters make plenty of revenue on this kind of commercial music, and they’re far more open to doing so than they were in the corporation-skeptical ‘90s. But AI, which allows machines to make all these sounds far more cheaply and quickly for brands than human musicians ever could, remains a looming threat.
“It definitely has the potential to be disruptive,” says Zeno Harris, a creative and licensing manager for West One Music Group, an LA company that licenses its 85,000-song catalog of original music to brands. “If we could use it as a tool, instead of replacing [musicians], that’s where I see it heading. But money dictates where the industry goes, so we’ll have to wait and see.”
This vision of an AI-dominated future in a crucial revenue-producing business is as disturbing for singers and songwriters as it is for Hollywood screenwriters, radio DJs and voiceover actors. “I just took a life-insurance-brand deal to pay for making my record,” says Grace Bowers, 17, a Nashville blues guitarist. “I’m definitely not the only one who’s doing that. Artists are turning to anyone they can to [make] money, because touring and putting out music isn’t the biggest money-maker. If Arby’s came to me and said, ‘Can you write me a jingle?,’ I’d say, ‘Hell, yeah!’”
End of an Era
From the late 1920s, when a barbershop quartet sang “Have You Tried Wheaties?” on the air for a Minneapolis radio station, through the late ’90s, jingles dominated the music-in-advertising business. Jingle houses like Jam, JSM and Rave competed ferociously to procure contracts with major brands and advertising agencies. In the process, they created lucrative side gigs for decades for rising talents like Luther Vandross, Patti Austin and Richard Marx, who, as jingle veteran Michael Bolton wrote in his memoir, “all shook the jingle-house tree.”
“If you wrote a jingle that was going to be a national campaign, and you sang on it, you could make $50,000, and you could do three of those a year,” recalls John Loeffler, a singer-songwriter who worked on 2,500 jingle campaigns as the head of the Rave Music jingle house, before serving as a BMG executive for years.
John Stamos and Dave Coulier played jingle writers on ABC’s Full House. In this scene from “Jingle Hell,” Mary Kate or Ashley Olsen gives “Uncle Jesse” a high five.
The jingle era ended, for the most part, by the late 1990s, as TV splintered from four must-see broadcast networks to dozens of cable channels, followed by video streaming networks such as Netflix. (Steve Karmen, the ad-agency vet who wrote “Nationwide … is on your side,” authored what many consider the post-mortem for the era with his 2005 book, Who Killed the Jingle?) “I wish the young artists these days could have the opportunities I had,” Loeffler says. “It’s very different.”
Today, artists are far more likely to have broad branding relationships with corporations such as Target — Swift has appeared in commercials and the retailer has sold exclusive versions of her albums for years, and Billie Eilish, Olivia Rodrigo and others have made similar deals — than they are to write catchy ditties for TV and radio. “I personally haven’t heard the word ‘jingle’ in the lifespan of Citizen,” says Theo de Gunzburg, managing partner of Citizen, a five-year-old music house that employs studio artists to create original music for advertisers. “The clients we deal with want to be taken more seriously. The audience is more discerning.”
Citizen employs 10 full-time staff members, including five composers, to create original music for ad campaigns, and, like West One and many other music houses, maintains a library of licensable tracks. The company’s commercial work includes Adidas’ “Runner 321,” which juxtaposes Michael Jordan and Babe Ruth with clips of athletes who have Down syndrome, all set to its own sports percussion tracks. Major music publishers also maintain in-house services for this kind of production music. Warner Chappell Music’s extensive online library includes a hip-hop-style track called “Ready to Fight,” described as “driving trap drums, electric guitar, bold brass, cerebral synths and go-getter male vocals.” WCM represents “specialized songwriters who like to write in short form” and “are also great at writing pop hits,” says Dan Gross, the publisher’s creative sync director, who previously was a music supervisor at top ad agency McCann.
Ba Da Ba Ba Ba
The prevailing catchphrase for music in advertising today is “sonic branding” — designing a brief musical calling card, like the Intel bong, which reflects the feel of a product and can be used in ads, promotions, app tones, TikTok and Instagram videos and even virtual-reality games. “The message of flexibility is really the key thing,” says Simon Kringel, sonic director for Unmute, a Copenhagen agency that has worked with brands such as magazine publisher Aller Media to develop catchy musical snippets that serve as what he calls “watermarks.” “The only chance we have is to make sure every time we interact with our audience, there is something that triggers this brand recall.”
Kringel avoids using the term “jingle” — “that whole approach kind of faded out,” he says — but the most memorable old-school jingles have taken on a classic-rock quality in recent years. McDonald’s 20-year-old “ba da ba ba ba,” “Nationwide … is on your side” and many others are repeated endlessly in TV-streaming commercial breaks. State Farm’s “like a good neighbor … “ remains the emperor of earworms, and the company deploys the Barry Manilow-penned jingle in strategic ways. Around 2020, says State Farm head of marketing Alyson Griffin, the insurance giant conducted a study about its own marketing assets. “They found 80% of people recognized the notes, 95% recognized the slogan — and when they put the two together, there was nearly 100% recognition,” she says. “We recently tripled down on the jingle.”
Similarly, Chili’s recently went retro, hiring Boyz II Men to update its ’90s “baby back ribs” jingle with a new advertisement. “Jingles don’t feel as modern as maybe brands want to be,” says George Felix, chief marketing officer for Chili’s Grill and Bar. “But there’s certainly still runway for jingles if you do it right.”
For now, brands are still spending copiously on advertising music of all kinds — and every once in a while, an actual jingle emerges. Temu, a new e-commerce company owned by a Chinese retail giant, will reportedly spend $3 billion on advertising this year, emphasizing its insanely catchy “ooh, ooh, Temu” jingle that aired during the Super Bowl.
Keeping an Eye on AI
Yet some in the commercial-music industry worry about what Paige’s punk-EDM-hip-hop-klezmer AI-jingle exercise portends. “Do I think the [AI] fears are overblown? No. Am I concerned? Yes,” says Sally House, CEO of The Hit House, a 19-year-old Los Angeles company that hires composers, engineers, sound designers and performers for music in Progressive, Marvel, HBO and Amazon Prime Video spots. “We’re all waiting for copyright to save us and the government to do something about it.”
But Warner Chappell’s Gross says his team receives requests for “custom compositions” because brands want to work with the publisher’s stable of A-list songwriters. “AI doesn’t really factor in for us in this instance,” he says.
At Mastercard, which underwent a two-year process to unveil a piece of mellow, new-age-y instrumental music as part of its sonic brand in 2019, AI may be useful for future ad campaigns. But not for creating music. Mastercard employed its own creative people, plus composers, musicologists, sound engineers and even neuroscientists, to work on its distinctive tone. “If I tell the AI engine who is the audience, what am I trying to create, what is the context, and ask it to compose something based on the Mastercard melody, it will do a very fine job,” says Raja Rajamannar, a classically trained musician who is the company’s chief marketing and communications officer. “But if I had to create the Mastercard sonic architecture, I cannot delegate it to AI. The original creation, at this stage, clearly has to come from human beings.”
Paige agrees. Even if AI ultimately takes a cut out of the space — and certainly out of the potential profits for writers — it won’t completely gut the need for real musicians making advertorial music. Classic jingles endure, he says, because they contain humanity and spirit — and because people “know there’s a human being behind the Folger’s theme song.”
Around the time that ChatGPT was first released to the public, Alex Bestall, CEO of Rightsify, a music production library, discovered that he was sitting on a new, lucrative business opportunity. “I realized all the songs and all the metadata we have around the songs had a lot of value for AI,” he says. “It was a pretty quick and easy choice for us to license our library out.”
Hundreds of thousands — if not millions — of songs or other musical content are needed to train a competitive AI model to generate music. Though a number of AI companies believe they don’t need to pay for the music that their models train on, citing “fair use,” others have taken a more musician-friendly approach by paying artists and rights holders when using their music to train AI models.
On the surface, the AI industry seems like a perfect new customer for music production libraries — affordable, pre-cleared catalogs of songs in a variety of styles. Historically, production music has been popular among advertisers, social media creators, podcasters and low-budget film and TV producers who need music to soundtrack their creations but lack the time or money to license big-name hits, which often have multiple rights holders and hefty fees. As use cases for production music have grown, so has that sector of the publishing business. As of 2022, MIDiA Research says production music is worth about $1 billion across recorded music and publishing combined.
While many artists’ rights advocates consider licensing songs to be the “ethical” way to train an AI music model, it still poses a legitimate threat to the existing music business: “Once the [licensing transaction] is made, that model is going to end up totally competing with you for the same customers,” says Antony Demekhin, CEO of Tuney, an AI music company that makes songs intended for social media creators and podcasters. “Over time this could degrade your whole business if you’re not careful about the deal you make.”
No standard contract exists for the licensing of production music for AI training. Despite the long-term risks, Bestall says he has licensed his back catalog to multiple AI companies. (Non-disclosure agreements prevent him from revealing which ones.) “Usually we license our back catalog and then we have an ongoing commitment to deliver a certain amount of music over the next two or three years of the agreement,” he says.
In the short term, these new deals between music production libraries and AI companies have actually created jobs for more human musicians. Given his new customers’ desire for more music during their deal terms, Bestall now has 24 full-time musicians — and almost 100 contractors — employed to make more music and grow Rightsify’s library, which already has over 1 million copyrights.
Lee Johnson, CEO and founder of production library Audiosparx, says AI has also allowed him to grow his business. Audiosparx is best known for licensing its catalog to train Stability AI’s Stable Audio model beginning in 2023, and Johnson says he received permission from the musicians represented in his catalog before he agreed to license their music to the AI company. Audiosparx acts as the licensor for production musicians, but unlike Bestall’s library, it does not acquire the songs in its catalog outright. “We took the deal to our artist community and about 90% of the artists opted into it,” he says. “About 10% decided to stay out of it. It was encouraging to see that much uptake because a lot of people are very passionately against [AI]. … We just felt it made more sense to have a seat on the train and ride the train to the future, rather than getting run over by [it].”
Bestall and Johnson say that, so far, partnering with AI companies has not yet affected their other business. Bestall, however, remains sober about the changes that may occur in the next few years. “I know it’s a threat to our existing business lines, but a huge opportunity for the future,” he says. “I think if people are too married to the exact business model of the past, they may struggle.” Johnson, who has pivoted Audiosparx’s business multiple times over its 20-year existence, expresses a similar view about being open to change.
Not everyone agrees. “I think this is short-term money for a long-term loss,” says Henry Phipps, an emerging film composer who previously held a full-time job writing songs for a production library. After surveying the future of AI music, he left his post to try working for an AI music start-up. Now, he’s back writing for libraries and working toward his dream of being a film/TV composer. (Phipps spoke to Billboard under the condition that his former employers’ names would be kept private.) “But you can’t blame anyone for taking the opportunity to include their music in these datasets because you’d be missing out on a short-term paycheck, and everyone else would go ahead,” he says. “It’s kind of futile to try to stop the tide. Someone will always take the deal.”
To Phipps, the way production music is made is already similar to the way AI music is prompted. “I get a brief, which feels like a prompt,” he says. “Recently, one of those prompts was for reality TV with a bunch of adjectives, and then my job is to return a piece of music. It already feels like machine work in a way.”
While “very few people aspire to be production library composers long-term,” Phipps explains, “it is a way into [the music business] — to survive, eat and pay rent and work towards projects that are more creatively fulfilling.” Phipps says working at an AI music start-up made him “more nervous” for his future opportunities as a composer for film and TV. As he sees it, AI music could augment, but not entirely replace, the compositions of blockbuster film scorers — but it might “cut off the bottom rungs of the ladder” by decreasing opportunities for young upstarts like him.
Ed Newton-Rex, former vp of audio for Stability AI and founder of non-profit Fairly Trained, which certifies AI music companies that properly license their training data, advises that “if a library wants to take a deal like this, the terms should be very well thought through.”
Particular areas of concern Newton-Rex identifies include making sure that once a deal term ends, the AI model that used it will be retired or re-trained without the library’s material. “There’s no current way to just untrain a model, but you can add clauses to control what happens after the license is over,” he says. Newton-Rex also advises libraries to be careful about licensing their data to an “open-source model” — a move he calls “totally irreversible” because it makes the model available for public use.
Still, Newton-Rex admits there is “absolutely” still risk ahead. “Musicians making production music are hugely at risk,” he says. “Ultimately, generative AI is faster, cheaper and the quality is already very good.”
Just in case, Bestall is covering his bases by launching his own AI model, Hydra II, to generate royalty-free background music for cafes, hotel lobbies and other public spaces, should his customers ever prefer AI music to his current repertoire of background library music. Still, he feels his library will always be essential: “We’re not too concerned about the possibility of AI companies saying they don’t need production music anymore. Human data is so valuable for AI.”
Apple has jumped into the race to bring generative artificial intelligence to the masses, spotlighting a slew of features Monday designed to soup up the iPhone, iPad and Mac.
And in a move befitting a company known for its marketing prowess, the AI technology coming as part of free software updates later this year is being billed as “Apple Intelligence.”
Even as it tried to put its own stamp on technology’s hottest area, Apple tacitly acknowledged during its Worldwide Developers Conference that it needs help catching up with companies like Microsoft and Google, which have emerged as the early leaders in AI. Apple is leaning on ChatGPT, made by the San Francisco startup OpenAI, to make its often-bumbling virtual assistant Siri smarter and more helpful.
“All of this goes beyond artificial intelligence, it’s personal intelligence, and it is the next big step for Apple,” CEO Tim Cook said.
Siri’s optional gateway to ChatGPT will be free to all iPhone users and made available on other Apple products once the option is baked into the next generation of Apple’s operating systems. ChatGPT subscribers are supposed to be able to easily sync their existing accounts when using the iPhone, and should get more advanced features than free users would.
To herald the alliance with Apple, OpenAI CEO Sam Altman sat in the front row of the packed conference, which was attended by developers from more than 60 countries.
“Together with Apple, we’re making it easier for people to benefit from what AI can offer,” Altman said in a statement.
Beyond allowing Siri to tap into ChatGPT’s storehouse of knowledge, Apple is giving its 13-year-old virtual assistant an extensive makeover designed to make it more personable and versatile, even as it currently fields about 1.5 billion queries a day.
When Apple releases free updates to the software powering the iPhone and its other products this fall, Siri will signal its presence with flashing lights along the edges of the display screen. It will be able to handle hundreds more tasks — including chores that may require tapping into third-party devices — than it can now, based on Monday’s presentations.
Apple’s full suite of upcoming features will only work on more recent models of the iPhone, iPad and Mac because the features require advanced processors. For instance, consumers will need last year’s iPhone 15 Pro or the next model coming out later this year to take full advantage of Apple’s AI package, although all the tools will work on Macs dating back to 2020 once the next Mac operating system is installed.
The AI-packed updates coming to the next versions of Apple software are meant to enable the billions of people who use the company’s devices to get more done in less time, while also giving them access to creative tools that could liven things up. For instance, Apple will deploy AI to allow people to create emojis, dubbed “Genmoji,” on the fly to fit the vibe they are trying to convey.
Apple’s goal with AI “is not to replace users, but empower them,” Craig Federighi, Apple’s senior vice president of software engineering, told reporters. Users will also have the option of going into the device settings to turn off any AI tools they don’t want.
Monday’s showcase seemed aimed at allaying concerns that Apple might be losing its edge with the advent of AI, a technology expected to be as revolutionary as the 2007 introduction of the iPhone. Both Google and Samsung have already released smartphone models touting AI features as their main attractions, while Apple has been stuck in an uncharacteristically extended sales slump.
AI mania is the main reason that Nvidia, the dominant maker of the chips underlying the technology, has seen its market value rocket from about $300 billion at the end of 2022 to about $3 trillion. The meteoric rise allowed Nvidia to surpass Apple as the second most valuable company in the U.S. Earlier this year, Microsoft also eclipsed the iPhone maker on the strength of its so-far successful push into AI.
Investors didn’t seem as impressed with Apple’s AI presentation as the crowd that came to the company’s Cupertino, California, headquarters to see it. Apple’s stock price dipped nearly 2% Monday.
Despite that negative reaction, Wedbush Securities analyst Dan Ives asserted in a research note that Apple is “taking the right path.” He hailed the presentation as a “historical” day for a company that already has reshaped the tech industry and society.
Besides pulling AI tricks out of its bag, Apple also used the conference to confirm that it will be rolling out a technology called Rich Communication Services, or RCS, to its iMessage app. The technology should improve the quality and security of texting between iPhones and devices powered by Android software, such as the Samsung Galaxy and Google Pixel.
The change, due out with the next version of iPhone’s operating software, won’t eliminate the blue bubbles denoting texts originating from iPhones and the green bubbles marking text sent from Android devices — a distinction that has become a source of social stigma.
In another upcoming twist to the iPhone’s messaging app, users will be able to write a text (or have an AI tool compose it) in advance and schedule a specific time to automatically send it.
Monday’s presentation marked the second straight year that Apple has created a stir at its developers conference by using it to usher in a trendy form of technology that other companies already had employed.
Last year, Apple provided an early look at its mixed-reality headset, the Vision Pro, which wasn’t released until early 2024. Nevertheless, Apple’s push into mixed reality — with a twist that it bills as “spatial computing” — has raised hopes that there will be more consumer interest in this niche technology.
Part of that optimism stems from Apple’s history of releasing technology later than others, then using sleek designs and slick marketing campaigns to overcome its tardy start.
Bringing more AI to the iPhone will likely raise privacy concerns — an area in which Apple has gone to great lengths to assure its loyal customers that it can be trusted not to peer too deeply into their personal lives. Apple did talk extensively Monday about its efforts to build strong privacy protections and controls around its AI technology.
One way Apple is trying to convince consumers that the iPhone won’t be used to spy on them is by harnessing its chip technology so that most of its AI-powered features are handled on the device itself instead of at remote data centers, often called “the cloud.” Going down this route also helps protect Apple’s profit margins, because AI processing in the cloud is far more expensive than running it solely on a device.
When Apple users make AI demands that require computing power beyond what’s available on the device, the tasks will be handled by what the company is calling a “private cloud” that is supposed to shield their personal data.
Apple’s AI “will be aware of your personal data without collecting your personal data,” Federighi said.
When Snowd4y, a Toronto parody rapper, released the track “Wah Gwan Delilah” featuring Drake via SoundCloud on Monday (June 3), it instantly went viral.
“This has to be AI,” one commenter wrote about the song. It was a sentiment shared by many others, particularly given the track’s ridiculous lyrics and the off-kilter audio quality of Drake’s vocals.
To date, the two rappers have not confirmed or denied the AI rumor. Though Drake posted the track on his Instagram story, it is hardly a confirmation that the vocals in question are AI-free. (As we learned during Drake’s recent beef with Kendrick Lamar, the rapper is not afraid of deep-faking voices).
To try to get to the bottom of the “Wah Gwan Delilah” mystery, Billboard contacted two companies that specialize in AI audio detection to review the track. The answer, unfortunately, was not too satisfying.
“Our first analysis reveals SOME traces of [generative] AI, but there seems to be a lot of mix involved,” wrote Romain Simiand, chief product officer of Ircam Amplify, a French company that creates audio tools for rights holders, in an email response.
Larry Mills, senior vp of sales at Pex, which specializes in tracking and monetizing music usage across the web, also found mixed results. He told Billboard the Pex research and development team “ran the song through [their] VoiceID matcher” and that “Drake’s voice on the ‘Wah Gwan Delilah’ verse does not match as closely to Drake’s voice…[as his voice on] official releases [does], but it is close enough to confirm it could be Drake’s own voice or a good AI copy.” Notably, Pex’s VoiceID tool alone is not enough to definitively distinguish between real and AI voices, but its detection of differences between the singer/rapper’s voice on “Wah Gwan Delilah” and his other, officially released songs could indicate some level of AI manipulation.
A representative for Drake did not immediately respond to Billboard’s request for comment.
How to Screen for AI in Songs
There are multiple types of tools that are currently used to distinguish between AI-generated music and human-made music, but these nascent products are still developing and not definitive. As Pex’s Jakub Galka recently wrote in a company blog post about the topic, “Identifying AI-generated music [is] a particularly difficult task.”
Some detectors, like Ircam’s, identify AI music using “artifact detection,” meaning they look for parts of a work that deviate from what a real recording would contain. A clear example of this can be seen with AI-generated images: early AI images often contained hands with extra or misshapen fingers, and some detection tools exist to pick up on these inaccuracies.
Other detectors rely on reading watermarks embedded in the AI-generated music. While these watermarks are not perceptible to the human ear, they can be detected by certain tools. Galka writes that “since watermarking is intended to be discoverable by watermark detection algorithms, such algorithms can also be used to show how to remove or modify the watermark embedded in audio so it is no longer discoverable” — something he sees as a major flaw with this system of detection.
Pex’s method of using VoiceID, which can determine if a singer matches between multiple recordings, can also be useful in AI detection, though it is not a clear-cut answer. This technology is particularly helpful when users take to the internet and release random tracks with Drake vocals, whether they’re leaked songs or AI deepfakes. With VoiceID, Pex can tell a rights holder that their voice was detected on another track that might not be an official release from them.
When VoiceID is paired with the company’s other product, Automatic Content Recognition (ACR), it can sometimes determine if a song uses AI vocals or not, but the company says there is not enough information on “Wah Gwan Delilah” to complete a full ACR check.
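Pex has not published how VoiceID works internally, but voice-matching systems of this kind generally compare speaker embeddings: fixed-length vectors extracted from each recording and scored with a similarity measure such as cosine similarity against a threshold. The Python sketch below is a generic, hypothetical illustration of that comparison step only; the extract_voice_embedding stub stands in for whatever proprietary model a vendor would actually run, and the filenames and threshold are made up for the example.

```python
# Generic illustration of embedding-based voice matching (not Pex's VoiceID).
# A real system would replace extract_voice_embedding with a trained
# speaker-embedding model; here it is a stub so the comparison logic is clear.
import numpy as np


def extract_voice_embedding(audio_path: str) -> np.ndarray:
    """Placeholder: return a fixed-length voice embedding for a recording.

    In practice this would run a speaker-embedding network over the isolated
    vocal stem. Here it returns deterministic random numbers purely for shape.
    """
    rng = np.random.default_rng(abs(hash(audio_path)) % (2**32))
    return rng.standard_normal(256)


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def compare_to_reference(candidate: str, references: list[str], threshold: float = 0.75) -> dict:
    """Score a candidate track against known recordings of an artist's voice."""
    cand = extract_voice_embedding(candidate)
    scores = {ref: cosine_similarity(cand, extract_voice_embedding(ref)) for ref in references}
    best = max(scores.values())
    return {
        "scores": scores,
        "best_score": best,
        # A high-but-imperfect score is ambiguous: it could be the real voice
        # in a different mix, or a convincing AI clone of that voice.
        "likely_match": best >= threshold,
    }


if __name__ == "__main__":
    result = compare_to_reference(
        "candidate_verse.wav",
        ["official_release_1.wav", "official_release_2.wav"],
    )
    print(result)
```

The ambiguity Pex describes falls directly out of this kind of scoring: a score that is close to, but below, the level of an artist’s official releases cannot by itself say whether the difference comes from mixing, a leaked vocal or AI manipulation.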
Parody’s Role in AI Music
Though it can’t be determined without a doubt whether “Wah Gwan Delilah” contains AI vocals, parody songs in general have played a major role in popularizing and normalizing AI music. This is especially evident on TikTok, which is replete with so-called “AI Covers,” pairing famous vocalists with unlikely songs. Popular examples of this trend include Kanye West singing “Pocket Full of Sunshine” by Natasha Bedingfield, Juice WRLD singing “Viva La Vida” by Coldplay, Michael Jackson singing “Careless Whisper” by George Michael and more.
Most recently, AI comedy music took center stage with Metro Boomin‘s SoundCloud-released track “BBL Drizzy” — which sampled an AI-generated song of the same name. The track poked fun at Drake and his supposed “Brazilian Butt Lift” during the rapper’s beef with Lamar, and in the process, it became the first major use of an AI-generated sample. Later, Drake and Sexyy Red sampled the original AI-generated “BBL Drizzy” on their own song, “U My Everything,” lifting “BBL Drizzy” to new heights.
Writing and playing a song once required some level of musical training, and recording was a technically complex process involving expensive equipment. Today, thanks to advances in artificial intelligence, a growing number of companies allow anyone in the world to skip this process and create a new song with a click of a button.
This is an exciting prospect in Silicon Valley. “It’s really easy to get investment in that sort of thing right now,” Lifescore co-founder and CTO Tom Gruber says dryly, “because everyone thinks that genAI is going to change the whole world and there will be no human creators left.” (Lifescore offers “AI-powered music generation in service of artists and rights holders.”)
Recently, however, some executives in the AI music space have been asking: How much do average users actually want to generate their own songs?
“For whatever reason, you’re just not seeing an extreme level of adoption of these products yet among the everyday consumer,” notes one founder of an AI music company who spoke on the condition of anonymity. “Where’s the 80 to 100 million users on this stuff?”
“My hunch is no text-to-music platform will have decent retention figures yet,” says Ed Newton-Rex, who founded the AI music generation company Jukedeck and then worked at Stability AI. “It’s a moment of magic when you first try a generative music platform that works well. Then most people don’t really have a use for it.” So far, the most popular use for song generation tools appears to be making meme songs.
While there are hundreds of companies working on genAI music technology, the two that have generated the most headlines this year are Suno and Udio. The former recently announced that 10 million users have tested it in eight months, while the latter told Bloomberg that 600,000 people tried its song generation product in the first two weeks. Neither company said how many of those testers became regular users. Compare this with ChatGPT, which was estimated to gain 100 million weekly users within two months. (Though there’s chatter that growth is leveling off there, too.)
It’s early for many of these AI song generation companies, of course. That said, executives who work at the intersection of music and artificial intelligence keep wondering: How can tools that spit out new tracks on command help users?
“You can end up with a really cool tech that doesn’t really solve a real problem,” Gruber notes. “If I want something that sounds like a folk song and has a clever lyric, I’ve already got all I can eat on Spotify, right? There’s no scarcity there.”
Part of the reason for ChatGPT’s explosion, according to Antony Demekhin, co-founder of Tuney, is that it “clearly solves a bunch of problems — it can edit text for you, help you code.” (Tuney develops “ethical music AI for creative media.”) Even so, a recent multi-country survey from the Reuters Institute noted that for ChatGPT, “frequent use is rare… Many of those who say they have used generative AI have only used it once or twice.”
Within the subset of survey respondents who said they have used generative AI for “creating media,” “making audio” was the ninth most popular task, with 3% of people engaging in it. The Reuters Institute’s survey indicates that generative AI tools are more commonly used for email writing, creative writing, and coding.
“How many ‘non-musicians’ actually wanted to create music before?” asks Michael “MJ” Jacob, founder of Lemonaide, a company developing “creative AI for musicians” (around 10,000 users). “I don’t think it’s true to say ‘everyone,’ as tempting as it may be.”
Another factor that could be holding back AI audio creation, according to Diaa El All, founder and CEO of Soundful, is the number of competing companies and the difficulty of judging the quality of their output. (Soundful, which bills itself as “the leading AI Music Studio for creators,” has a user-count “in the seven figures,” El All says.) Mike Caren, founder of the label and publishing company Artist Partner Group, believes that many people will try an AI song generator “that’s not that good, have a bad experience, and not come back for six months or a year.”
The uncertain regulatory climate almost certainly inhibits the spread of AI song-making tools as well. For now, in the U.S., there are open questions about the copyrightability of AI-generated tracks, potentially limiting their commercial value.
In addition, these programs need to be trained on large musical data-sets to generate credible tracks. While many prominent tech companies believe they should be allowed to undertake this process at will, labels and publishers argue that they need licensing agreements.
In other sectors, AI companies have already been sued for training on news articles and images without permission. Until the rules around training are clarified, through court cases or regulation, “corporate brands don’t want any of the risk” that comes with opening themselves up to potential litigation, explains Chris Walch, CEO and co-founder of Lifescore.
AI music leaders also believe their song generation technologies still suffer from a bad reputation. “I think the tech-lash and the stigma is really unexpected and very powerful,” says the anonymous AI music company founder.
OpenAI CEO Sam Altman recently discussed this on the All-In Podcast: “Let’s say we paid 10,000 musicians to create a bunch of music just to make a great training set where the music model could learn everything about song structure and what makes a good catchy beat,” he said. “I was kind of posing that as a thought experiment to musicians, and they’re like, ‘Well, I can’t object to that on any principled basis at that point. And yet, there’s still something I don’t like about it.’” (So far, OpenAI has steered clear of the music industry.)
While the average civilian’s interest in AI song generation remains unproven, plenty of producers and aspiring artists, who are already making music on a daily basis, would like to test products that spark ideas or streamline their workflow. That’s still a large user-base — “the global total addressable market for digital music producers alone is about 66 million,” according to Splice CEO Kakul Srivastava, “and that continues to grow at a pretty rapid pace” — though it’s not the entire world’s population.
“We were all talking about how artists are screwed, because that’s a dramatic story,” Demekhin says. “To me, what’s more likely is these tools just get integrated into the existing ecosystem, and people start using it as a source for material like a Splice,” which provides artists and producers sample packs full of musical building blocks.
Caren believes the AI music tools will be taken up first by musicians, next by creators looking for sound in their videos, then by fans and “music aficionados” who want to express their appreciation for their favorite artists by making something.
“The question of how far it penetrates to people who are not significant music fans?” he asks. “I don’t know.”
On May 24, Sexyy Red and Drake teamed up on the track “U My Everything.” And in a surprise — Drake’s beef with Kendrick Lamar had seemingly ended — the track samples “BBL Drizzy” (originally created using AI by King Willonius, then remixed by Metro Boomin) during the Toronto rapper’s verse.
It’s another unexpected twist for what many are calling the first-ever AI-generated hit, “BBL Drizzy.” Though Metro Boomin’s remix went viral, his version never appeared on streaming services. “U My Everything” does, making it the first time an AI-generated sample has appeared on an official release — and posing new legal questions in the process. Most importantly: Does an artist need to clear a song with an AI-generated sample?
“This sample is very, very novel,” says Donald Woodard, a partner at the Atlanta-based music law firm Carter Woodard. “There’s nothing like it.” Woodard became the legal representative for Willonius, the comedian and AI enthusiast who generated the original “BBL Drizzy,” after the track went viral and has been helping Willonius navigate the complicated, fast-moving business of viral music. Woodard says music publishers have already expressed interest in signing Willonius for his track, but so far, the comedian/creator is still only exploring the possibility.
Willonius told Billboard that it was “very important” to him to hire the right lawyer as his opportunities mounted. “I wanted a lawyer that understood the landscape and understood how historic this moment is,” he says. “I’ve talked to lawyers who didn’t really understand AI, but I mean, all of us are figuring it out right now.”
Working off recent guidance from the U.S. Copyright Office, Woodard says that the master recording of “BBL Drizzy” is considered “public domain,” meaning anyone can use it royalty-free and it is not protected by copyright, since Willonius created the master using AI music generator Udio. But because Willonius did write the lyrics to “BBL Drizzy,” copyright law says he should be credited and paid for the “U My Everything” sample on the publishing side. “We are focused on the human portion that we can control,” says Woodard. “You only need to clear the human side of it, which is the publishing.”
In hip-hop, it is customary to split publishing ownership and royalties 50/50: One half is expected to go to the producers, the other to the lyricists (who are often also the artists). “U My Everything” was produced by Tay Keith, Luh Ron and Jake Fridkis, so it is likely that those three producers split that half of the publishing in some fashion. The other half is what Willonius could be eligible for, along with fellow lyricists Drake and Sexyy Red. Woodard says the splits were solidified “post-release” on Tuesday, May 28, but declined to specify what percentage of the publishing Willonius will take home. “I will say though,” Woodard says, cracking a smile. “He’s happy.”
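As a purely hypothetical illustration of the convention described above (not the actual, undisclosed “U My Everything” splits), here is how a 50/50 producer/lyricist division of publishing might be computed in Python when each side is shared equally among its contributors.

```python
# Purely hypothetical arithmetic for the customary hip-hop publishing split
# described above; the real "U My Everything" percentages were not disclosed,
# and equal shares per side are an assumption made only for illustration.

def equal_publishing_splits(producers: list[str], lyricists: list[str]) -> dict:
    """Split 50% among producers and 50% among lyricists, equally within each side."""
    shares = {}
    for name in producers:
        shares[name] = 50.0 / len(producers)
    for name in lyricists:
        shares[name] = 50.0 / len(lyricists)
    return shares


if __name__ == "__main__":
    splits = equal_publishing_splits(
        producers=["Producer A", "Producer B", "Producer C"],
        lyricists=["Lyricist A", "Lyricist B", "Lyricist C"],
    )
    for name, pct in splits.items():
        print(f"{name}: {pct:.2f}%")  # each contributor gets roughly 16.67% in this toy case
```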
Upon the release of “U My Everything,” Willonius was not listed as a songwriter on Spotify or Genius, both of which list detailed credits but can contain errors. It turns out the reason for the omission was simple: the deal wasn’t done yet. “We hammered out this deal in the 24th hour,” jokes Woodard, who adds that he was unaware that “U My Everything” sampled “BBL Drizzy” until the day of its release. “That’s just how it goes sometimes.”
It is relatively common for sample clearance negotiations to drag on long after the release of songs. Some rare cases, like Travis Scott’s epic “Sicko Mode,” which credits about 30 writers due to a myriad of samples, can take years. Willonius tells Billboard when he got the news about the “U My Everything” release, he was “about to enter a meditation retreat” in Chicago and let his lawyer “handle the business.”
This sample clearance process poses another question: should Metro Boomin be credited, too? According to Metro’s lawyer, Uwonda Carter, who is also a partner at Carter Woodard, the simple answer is no. She adds that Metro is not pursuing any ownership or royalties for “U My Everything.”
“Somehow people attach Metro to the original version of ‘BBL Drizzy,’ but he didn’t create it,” Carter says. “As long as [Drake and Sexyy Red] are only using the original version [of “BBL Drizzy”], that’s the only thing that needs to be cleared,” she continues, adding that Metro is not the type of creative “who encroaches upon work that someone else does.”
When Metro’s remix dropped on May 5, Carter says she spoke with the producer, his manager and his label, Republic Records, to discuss how they could officially release the song and capitalize on its grassroots success, but then they ultimately decided against doing a proper release. “Interestingly, the label’s position was if [Metro’s] going to exploit this song, put it up on DSPs, it’s going to need to be cleared, but nobody knew what that clearance would look like because it was obviously AI.”
She adds, “Metro decided that he wasn’t going to exploit the record because trying to clear it was going to be the Wild, Wild West.” In the end, however, the release of “U My Everything” still threw Carter Woodard into that copyright wilderness, forcing them to find a solution for their other client, Willonius.
In the future, the two lawyers predict that AI could make their producer clients’ jobs a lot easier, now that there is a precedent for getting AI-generated masters royalty-free. “It’ll be cheaper,” says Carter. “Yes, cleaner and cheaper,” says Woodard.
Carter does acknowledge that while AI sampling could help some producers with licensing woes, it could hurt others, particularly the “relatively new” phenomenon of “loop producers.” “I don’t want to minimize what they do,” she says, “but I think they have the most to be concerned about [with AI].” Carter notes that using a producer’s loops can cost 5% to 10% or more of the producer’s side of the publishing. “I think that, at least in the near future, producers will start using AI sampling and AI-generated records so they could potentially bypass the loop producers.”
Songwriter-turned-publishing executive Evan Bogart previously told Billboard he feels AI could never replace “nostalgic” samples (like Jack Harlow’s “First Class,” which uses Fergie’s “Glamorous,” or Latto’s “Big Energy,” which uses Mariah Carey’s “Fantasy”), where the old song imbues the new one with greater meaning. But he said he could foresee it becoming a digital alternative to crate digging for obscure samples to chop up and manipulate beyond recognition.
Though the “U My Everything” complications are over — and set a new precedent for the nascent field of AI sampling in the process — the legal complications with “BBL Drizzy” will continue for Woodard and his client. Now, they are trying to get the original song back on Spotify after it was flagged for takedown. “Some guy in Australia went in and said that he made it, not me,” says Willonius. A representative for Spotify confirms to Billboard that the takedown of “BBL Drizzy” was due to a copyright claim. “He said he made that song and put it on SoundCloud 12 years ago, and I’m like, ‘How was that possible? Nobody was even saying [BBL] 12 years ago,’” Willonius says. (Udio has previously confirmed to Billboard that its backend data shows Willonius made the song on its platform).
“I’m in conversations with them to try to resolve the matter,” says Woodard, but “unfortunately, the process to deal with these sorts of issues is not easy. Spotify requires the parties to reach a resolution and inform Spotify once this has happened.”
Though there is precedent for other “public domain” music being disqualified from earning royalties, so far, given how new this all is, there is no Spotify policy that would bar an AI-generated song from earning royalties. Such songs are also allowed to stay up on the platform as long as they do not conflict with Spotify’s platform rules, says a representative for Spotify.
Despite the challenges “BBL Drizzy” has posed, Woodard says it’s remarkable, after 25 years in practice as a music attorney, that he is part of setting a precedent for something so new. “The law is still being developed and the guidelines are still being developed,” Woodard says. “It’s exciting that our firm is involved in the conversation, but we are learning as we go.”
Artificial intelligence is one of the buzziest — and most rapidly changing — areas of the music business today. A year after the fake-Drake song signaled the technology’s potential applications (and dangers), industry lobbyists on Capitol Hill, like the RIAA’s Tom Clees, are working to create guardrails to protect musicians — and maybe even get them paid.
Meanwhile, entrepreneurs like Soundful’s Diaa El All and BandLab’s Meng Ru Kuok (who oversees the platform as founder and CEO of its parent company, Caldecott Music Group) are showing naysayers that AI can enhance human creativity rather than just replacing it. Technology and policy experts alike have promoted the use of ethical training data and partnered with groups like Fairly Trained and the Human Artistry Coalition to set a positive example for other entrants into the AI realm.
What is your biggest career moment with AI?
Diaa El All: I’m proud of starting our product Soundful Collabs. We found a way to do it with the artists’ participation in an ethical way and that we’re not infringing on any of their actual copyrighted music. With Collabs, we make custom AI models that understand someone’s production techniques and allow fans to create beats inspired by those techniques.
Meng Ru Kuok: Being the first creation platform to support the Human Artistry Coalition was a meaningful one. We put our necks out there as a tech company where people would expect us to actually be against regulation of AI. We don’t think of ourselves as a tech company. We’re a music company that represents and helps creators. Protecting them in the future is so important to us.
Tom Clees: I’ve been extremely proud to see that our ideas are coming through in legislation like the No AI Fraud Act in the House [and] the No Fakes Act in the Senate.
The term “AI” represents all kinds of products and companies. What do you consider the biggest misconception around the technology?
Clees: There are so many people who work on these issues on Capitol Hill who have only ever been told that it’s impossible to train these AI platforms and do it while respecting copyright and doing it fairly, or that it couldn’t ever work at scale. (To El All and Kuok.) A lot of them don’t know enough about what you guys are doing in AI. We need to get [you both] to Washington now.
Kuok: One of the misconceptions that I educate [others about] the most, which is counterintuitive to the AI conversation, is that AI is the only way to empower people. AI is going to have a fundamental impact, but we’re taking for granted that people have access to laptops, to studio equipment, to afford guitars — but most places in the world, that isn’t the case. There are billions of people who still don’t have access to making music.
El All: A lot of companies say, “It can’t be done that way.” But there is a way to make technological advancement while protecting the artists’ rights. Meng has done it, we’ve done it, there’s a bunch of other platforms who have, too. AI is a solution, but not for everything. It’s supposed to be the human plus the technology that equals the outcome. We’re here to augment human creativity and give you another tool for your toolbox.
What predictions do you have for the future of AI and music?
Clees: I see a world where so many more people are becoming creators. They are empowered by the technologies that you guys have created. I see the relationship between the artist and fan becoming so much more collaborative.
Kuok: I’m very optimistic that everything’s going to be OK, despite obviously the need for daily pessimism to [inspire the] push for the right regulation and policy around AI. I do believe that there’s going to be even better music made in the future because you’re empowering people who didn’t necessarily have some functionality or tools. In a world where there’s so much distribution and so much content, it enhances the need for differentiation more, so that people will actually stand up and rise to the top or get even better at what they do. It’s a more competitive environment, which is scary … but I think you’re going to see successful musicians from every corner of the world.
El All: I predict that AI tools will help bring fans closer to the artists and producers they look up to. It will give accessibility to more people to be creative. If we give them access to more tools like Soundful and BandLab and protect them also, we could create a completely new creative generation.
This story will appear in the June 1, 2024, issue of Billboard.