Senator Peter Welch (D-Vt.) introduced the Transparency and Responsibility for Artificial Intelligence Networks (TRAIN) Act on Monday in the latest effort to shield songwriters, musicians and other creators from the unauthorized use of their works in training generative AI models.
If enacted, the legislation would grant copyright holders access to AI training records, enabling them to verify whether their creations were used, a process similar to methods used to combat internet piracy.
“This is simple: if your work is used to train A.I., there should be a way for you, the copyright holder, to determine that it’s been used by a training model, and you should get compensated if it was,” said Welch. “We need to give America’s musicians, artists, and creators a tool to find out when A.I. companies are using their work to train models without artists’ permission.”
Creative industry leaders have long voiced concerns about the opaque practices of AI companies regarding the use of copyrighted materials. Many of these startups and firms do not disclose their training methods, leaving creators unable to determine whether their works have been incorporated into AI systems. The TRAIN Act directly addresses this so-called “black box” problem, aiming to introduce transparency and accountability into the AI training process.
Welch’s bill is just the latest development in the battle between rights holders and generative AI. In May, Sony Music released a statement warning more than 700 AI companies not to scrape the company’s copyrighted data, while Warner Music released a similar statement in July. That same month in the U.S. Senate, an anti-AI deepfakes bill dubbed the No FAKES Act was introduced by a bipartisan group of senators. In October, thousands of musicians, composers, international organizations and labels — including all three majors — signed a statement opposing AI companies and developers using their work without a license for training generative AI systems.
During a Senate Judiciary Committee hearing earlier this month, U.S. Register of Copyrights Shira Perlmutter emphasized the importance of transparency to protect copyrighted materials, saying that without insight into how AI systems are trained, creators are left in the dark about potential misuse of their work, undermining their rights and earnings.
Sen. Welch has been active in promoting consumer protections and safety around emerging technologies, including AI. His previous initiatives include the AI CONSENT Act, which mandates that online platforms obtain informed consent from users before utilizing their data for AI training, and the Digital Platform Commission Act, which proposes the establishment of a federal regulatory agency for digital platforms.
The TRAIN Act left the station with immediate widespread support from creative organizations, including the RIAA, ASCAP, BMI, SESAC, SoundExchange and the American Federation of Musicians, among others.
Several music industry leaders praised the TRAIN Act for its potential to balance innovation with respect for creators’ rights. Mitch Glazier, RIAA chairman & CEO, highlighted its role in ensuring creators can pursue legal recourse when their works are used without permission. Todd Dupler, the Recording Academy’s chief advocacy and public policy officer, and Mike O’Neill, the CEO of BMI, echoed these sentiments, stressing the bill’s importance in preventing misuse and enabling creators to hold AI companies accountable.
David Israelite, president & CEO of the National Music Publishers’ Association, pointed to the TRAIN Act as a vital measure to close regulatory gaps and ensure transparency in AI practices, while John Josephson, chairman and CEO of SESAC Music Group, praised its dual approach of promoting responsible innovation while protecting creators.
Additional endorsements came from SoundExchange CEO Michael Huppe, who stressed the need for creators to know how their works are being used in AI systems; Elizabeth Matthews, CEO of ASCAP, who called for artists to be fairly compensated; and Ashley Irwin, president of the Society of Composers & Lyricists, who emphasized the bill’s role in safeguarding the rights of composers and songwriters.
Select Music Industry Reactions to the TRAIN Act:
Mitch Glazier, RIAA: “Senator Welch’s carefully calibrated bill will bring much needed transparency to AI, ensuring artists and rightsholders have fair access to the courts when their work is copied for training without authorization or consent. RIAA applauds Senator Welch’s leadership and urges the Senate to enact this important, narrow measure into law.”
David Israelite, NMPA: “We greatly appreciate Senator Welch’s leadership on addressing the complete lack of regulation and transparency surrounding songwriters’ and other creators’ works being used to train generative AI models. The TRAIN Act proposes an administrative subpoena process that enables rightsholders to hold AI companies accountable. The process necessitates precise record-keeping standards from AI developers and gives rightsholders the ability to see whether their copyrighted works have been used without authorization. We strongly support the bill which prioritizes creators who continue to be exploited by unjust AI practices.”
Elizabeth Matthews, ASCAP: “The future of America’s vibrant creative economy depends upon laws that protect the rights of human creators. By requiring transparency about when and how copyrighted works are used to train generative AI models, the TRAIN Act paves the way for creators to be fairly compensated for the use of their work. On behalf of ASCAP’s more than one million songwriter, composer and music publisher members, we applaud Senator Welch for his leadership.”
Mike O’Neill, BMI: “Some AI companies are using creators’ copyrighted works without their permission or compensation to ‘train’ their systems, but there is currently no way for creators to confirm that use or require companies to disclose it. The TRAIN Act will provide a legal avenue for music creators to compel these companies to disclose those actions, which will be a step in the right direction towards greater transparency and accountability. BMI thanks Senator Welch for introducing this important legislation.”
John Josephson, SESAC: “SESAC applauds the TRAIN Act, which clears an efficient path to court for songwriters whose work is used by AI developers without authorization or consent. Senator Welch’s narrow approach will promote responsible innovation and AI while protecting the creative community from unlawful scraping and infringement of their work.”
Michael Huppe, SoundExchange: “As artificial intelligence companies continue to train their generative AI models on copyrighted works, it is imperative that music creators and copyright owners have the ability to know where and how their works are being used. The Transparency and Responsibility for Artificial Intelligence Networks (TRAIN) Act would provide creators with an important and necessary tool as they fight to ensure their works are not exploited without the proper consent, credit, or compensation.”
Todd Dupler, The Recording Academy: “The TRAIN Act would empower creators with an important tool to ensure transparency and prevent the misuse of their copyrighted works. The Recording Academy® applauds Sen. Welch for his leadership and commitment to protecting human creators and creativity.”
Stability AI, one of the world’s most prominent generative AI companies, has joined the likes of Google, Meta and OpenAI in creating a model that generates clips of music and sound. Called “Stable Audio,” the new text-to-music generator was trained on sounds from the music library AudioSparx.
Stability touts its new product as the first music generation tool that creates high-quality, 44.1 kHz music for commercial use through a process called “latent diffusion,” which was first introduced for images by Stable Diffusion, the company’s marquee product. Stable Audio conditions its output on text metadata as well as audio file duration and start time, allowing for greater control over the content and length of the generated audio.
By typing prompts like “post-rock, guitars, drum kit, bass, strings, euphoric, up-lifting, moody, flowing, raw, epic, sentimental, 125 BPM,” users can create up to 20 seconds of sound through its free tier, or up to 90 seconds via its pro subscription.
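For the technically curious, here is a rough sketch of what “conditioning on text metadata, duration and start time” can look like inside a latent-diffusion system. Every class, function and parameter below is hypothetical and invented for illustration; this is not Stability AI’s code or API, just a minimal toy of the general technique (a real system would also include an autoencoder that decodes the final latent into 44.1 kHz audio).

```python
# Toy latent-diffusion sampler conditioned on a text embedding plus
# (start time, duration). All names and shapes are hypothetical.
import torch
import torch.nn as nn

class TimingEmbed(nn.Module):
    """Embed (start_seconds, total_seconds) so the model can be asked
    for a specific window of audio of a specific length."""
    def __init__(self, dim: int = 256):
        super().__init__()
        self.proj = nn.Linear(2, dim)

    def forward(self, start_s: float, total_s: float) -> torch.Tensor:
        t = torch.tensor([[start_s, total_s]], dtype=torch.float32)
        return self.proj(t)  # shape (1, dim)

class ToyLatentDenoiser(nn.Module):
    """Stand-in for the denoising network at the heart of latent diffusion."""
    def __init__(self, latent_dim: int = 64, cond_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + cond_dim, 512),
            nn.SiLU(),
            nn.Linear(512, latent_dim),
        )

    def forward(self, z: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([z, cond], dim=-1))

@torch.no_grad()
def sample(denoiser: nn.Module, text_emb: torch.Tensor,
           timing_emb: torch.Tensor, steps: int = 50,
           latent_dim: int = 64) -> torch.Tensor:
    """Heavily simplified reverse-diffusion loop: start from noise and
    repeatedly nudge it toward the conditioned distribution. A real
    sampler would use a proper noise schedule."""
    cond = text_emb + timing_emb       # fuse both conditioning signals
    z = torch.randn(1, latent_dim)     # pure noise in the audio latent space
    for _ in range(steps):
        z = z - 0.02 * denoiser(z, cond)   # crude denoising step
    return z  # a real system would decode this latent to 44.1 kHz audio

text_emb = torch.randn(1, 256)          # stand-in for a text-prompt encoder output
timing_emb = TimingEmbed()(0.0, 20.0)   # "give me 20 seconds starting at t=0"
latent = sample(ToyLatentDenoiser(), text_emb, timing_emb)
print(latent.shape)  # torch.Size([1, 64])
```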
In its announcement, the company touts Stable Audio as a tool for musicians “seeking to create samples to use in their own music.” The music generated could also be used to soundtrack advertisements and creator content, among other commercial applications.
“As the only independent, open and multimodal generative AI company, we are thrilled to use our expertise to develop a product in support of music creators,” says Emad Mostaque, CEO of Stability AI. “Our hope is that Stable Audio will empower music enthusiasts and creative professionals to generate new content with the help of AI, and we look forward to the endless innovations it will inspire.”
This is not the AI giant’s first foray into audio and music AI. The company already has an open-source generative audio lab, HarmonAI, which is designed to create accessible and playful music production tools “by musicians for musicians.” Parts of the HarmonAI team, including Ed Newton-Rex, its vp of product, took part in designing Stable Audio.
Generative AI is hot right now. Over the last several years, music artists and labels have opened up to the idea of AI as an exciting new tool. Yet when DALL-E 2, Midjourney and GPT-3 were released to the public, the fear that AI would render artists obsolete came roaring back.
I am here from the world of generative AI with a message: We come in peace. And music and AI can work together to address one of society’s ongoing crises: mental wellness.
While AI can already create visual art and text that convincingly rival their human-made counterparts, it’s not quite there for music. AI music might be fine for soundtracking UGC videos and ads. But clearly we can do much better with AI and music.
There’s one music category where AI can help solve actual problems and open new revenue streams for everyone, from music labels to artists to DSPs: the functional sound market. Largely overlooked and very lucrative, it has been growing steadily over the past 10 years as the societal need for music to heal increases across the globe.
Sound is powerful. It’s the easiest way to control your environment. Sound can change your mood, trigger a memory, or lull you to sleep. It can make you buy more or make you run in terror (think about the music played in stores intentionally to facilitate purchasing behavior or the sound of alarms and sirens). Every day, hundreds of millions of people are self-medicating with sound. If you look at the top 10 most popular playlists at any major streaming service, you’ll see at least 3-4 “functional” playlists: meditation, studying, reading, relaxation, focus, sleep, and so on.
This is the market UMG chief Sir Lucian Grainge singled out in his annual staff memo earlier this year. He’s not wrong: DSPs are swamped with playlists consisting of dishwasher sounds and white noise, which divert revenue and attention from music artists. Functional sound is a vast ocean of content with no clear leader or even a clear product.
The nuance here is that the way people consume functional sound is fundamentally different from the way they consume traditional music. When someone tunes into a sleep playlist, they care first and foremost if it works. They want it to help them fall asleep, as fast as possible. It’s counterintuitive to listen to your favorite artist when you’re trying to go to sleep (or focus, study, read, meditate). Most artist-driven music is not scientifically engineered to put you into a desired cognitive state. It’s designed to hold your attention or express some emotion or truth the artist holds dear. That’s why ambient music — which, as Brian Eno put it, is as ignorable as it is interesting — had its renaissance moment a few years ago, arguably propelled by the mental health crisis.
How can AI help music artists and labels win back market share from white-noise and dishwasher-sound playlists? Imagine that your favorite music exists in two forms: the songs and albums that you know and love, and a functional soundscape version that you can sleep, focus or relax to. The soundscape version is produced by feeding the source stems from the album or song into a neuroscience-informed generative AI engine. The stems are processed, multiplied, spliced together and overlaid with FX, birthing a functional soundscape built from the DNA of your favorite music. This is when consumers finally have a choice: fall asleep or study/read/focus to a no-name white-noise playlist, or do it with a scientifically engineered functional soundscape version of their favorite music.
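To make the pipeline concrete, here is a toy sketch of the stem-to-soundscape idea in Python. The stems, gains and processing choices are all invented for the example; Endel has not published its engine, and the “neuroscience-informed” part here is reduced to simply softening transients and slowing material down.

```python
# Toy stem-to-soundscape pipeline: stretch and soften source stems,
# then layer them into a slowly evolving bed of sound.
import numpy as np

SR = 44_100  # sample rate in Hz

def smooth(x: np.ndarray, win: int = 512) -> np.ndarray:
    """Crude low-pass via moving average: softens transients for sleep/focus."""
    return np.convolve(x, np.ones(win) / win, mode="same")

def stretch(x: np.ndarray, factor: float = 2.0) -> np.ndarray:
    """Naive time-stretch by resampling (this also shifts pitch; a real
    engine would use a phase vocoder to stretch without pitch change)."""
    idx = np.linspace(0, len(x) - 1, int(len(x) * factor))
    return np.interp(idx, np.arange(len(x)), x)

def soundscape(stems, gains):
    """Process each stem, then overlay the layers into one soundscape."""
    layers = [g * smooth(stretch(s)) for s, g in zip(stems, gains)]
    n = min(len(layer) for layer in layers)
    mix = np.sum([layer[:n] for layer in layers], axis=0)
    return mix / max(1e-9, np.abs(mix).max())  # normalize to [-1, 1]

# Stand-in stems; a real pipeline would load the song's actual stems.
t = np.linspace(0, 5, SR * 5, endpoint=False)
bass = np.sin(2 * np.pi * 110 * t)                                # "bass" stem
keys = np.sin(2 * np.pi * 220 * t) * np.sin(2 * np.pi * 0.1 * t)  # "keys" stem
out = soundscape([bass, keys], gains=[0.6, 0.4])
print(out.shape, out.min(), out.max())
```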
This is how generative AI can create new revenue streams for all agents of the music industry, today: music labels win a piece of the market with differentiated functional content built from their catalog; artists expand their music universe, connect with their audience in new and meaningful ways, and extend the shelf life of their material; DSPs get ample, quality-controlled content that increases engagement. Once listeners find sounds that achieve their goals, they often stick with them. For example, Wind Down, James Blake’s sleep soundscape album, shows 50% listener retention in its seventh month after release. This shows that, when done right, functional sound has an incredibly long shelf life.
This win-win-win future is already here. By combining art, generative AI technology and science, plus business structures that enable such deals, we can transform amazing artist-driven sounds into healing soundscapes that listeners crave. In an age that yearns for calm, clarity, and better mental health, we can utilize AI to create new music formats that rights holders can embrace and listeners can appreciate. It promises AI-powered music that not only sounds good, but improves people’s lives, and supports artists. This is how you ride the functional music wave and create something listeners will find real value in and keep coming back to. Do not be afraid. Work with us. Embrace the future.
Oleg Stavitsky is co-founder and CEO of Endel, a sound wellness company that utilizes generative AI and science-backed research.
In the recent article “What Happens To Songwriters When AI Can Generate Music,” Alex Mitchell offers a rosy view of a future of AI-composed music coexisting in perfect barbershop harmony with human creators — but there is a conflict of interest here, as Mitchell is the CEO of an app that does precisely that. It’s almost like cigarette companies in the 1920s saying cigarettes are good for you.
Yes, the honeymoon of new possibilities is sexy, but let’s not pretend this is benefiting the human artist as much as corporate clients who’d rather pull a slot machine lever to generate a jingle than hire a human.
While I agree there are parallels between the invention of the synthesizer and AI, there are stark differences, too. The debut of the theremin (the first electronic instrument) playing the part of a lead violin in an orchestra was scandalous and fear-evoking. Audiences hated its sinusoidal wave’s lack of nuance, and some claimed it was “the end of music.” That seems ludicrous and pearl-clutching now, and I worship the chapter of electrified instruments that followed (thank you, Sister Rosetta Tharpe and Chuck Berry), but in a way, they were right. It was the closing of a chapter, and the birth of something new.
Is new always better, though? Or is there a sweet spot ratio of machine to human? I often wonder this sitting in my half analog, half digital studio, as the stakes get ever higher from flirting with the event horizon of technology.
In this same article, Diaa El All (another CEO of an AI music generation app) claims that drummers were pointlessly scared of the drum machine and sample banks replacing their jobs because it’s all just another fabulous tool. (Guess he hasn’t been to many shows where singers perform with just a laptop.) Since I have spent an indecent portion of my modeling money collecting vintage drum machines (cuz yes, they’re fabulous), I can attest that I do indeed hire fewer drummers. In fact, since I started using sample libraries, I hire fewer musicians altogether. While this is a great convenience for me, the average upright bassist who used to be able to support his family with his trade now has to remain childless or take two other jobs.
Should we halt progress to preserve a placebo usefulness for obsolete craftsmen? No, change and competition are good, if not inevitable. But let’s not be naive about the casualties.
The gun and the samurai come to mind. For centuries, samurai were part of an elite warrior class who rigorously trained in kendo (the way of the sword) and bushido (a moral code of honor and indifference to pain) since childhood. As a result, winning wars was a meritocracy of skill and strategy. Then a Chinese ship with Portuguese sailors showed up with guns.
When feudal lord Oda Nobunaga saw the potential in these contraptions, he ordered hundreds be made for his troops. Suddenly, a farm boy with no skill could take down an archer or swordsman who had trained for years. Once coordinated marching and reloading formations were developed, it was an entirely new power dynamic.
During the economic crunch of the Napoleonic wars, a similar tidal shift occurred. Automated textile equipment allowed factory owners to replace loyal employees with machines and fewer, cheaper, less skilled workers to oversee them. As a result of jobless destitution, there was a region-wide rebellion of weavers and Luddites burning mills, stocking frames and lace-making machines, until the army executed them and held show trials to deter others from acts of “industrial sabotage.”
The poet Lord Byron opposed this new legislation, which made machine-breaking a capital crime, which is ironic considering his daughter, Ada Lovelace, would go on to write what is regarded as the first computer program, for Charles Babbage’s Analytical Engine. Oh, the tangled neural networks we weave.
Look what Netflix did to Blockbuster rentals. Or what Napster did to the recording artist. Even what the democratization of homemade porn streaming did to the porn industry. More recently, video games have usurped films. You cannot add something to an ecosystem without subtracting something else. It would be like smartphone companies telling fax machine manufacturers not to worry. Only this time, the fax machines are humans.
Later in the article, Mac Boucher (creative technologist and co-creator of non-fungible token project WarNymph along with his sister Grimes) adds another glowing review of bot- and button-based composition: “We will all become creators now.”
If everyone is a creator, is anyone really a creator?
An eerie vision comes to mind of a million TikTokers dressed as opera singers on stage, standing on the blueish corpses of an orchestra pit, singing over each other in a vainglorious cacophony, while not a single person sits in the audience. Just rows of empty seats reverberating the pink noise of digital narcissism back at them. Silent disco meets the Star Gate sequence’s death choir stack.
While this might sound like the bitter gatekeeping of a tape machine purist (only slightly), now might be a good time to admit that my band was one of the early projects to incorporate AI-generated lyrics and imagery. UNI and The Urchins has a morbid fascination with futurism and the wild west of Web 3.0. Who doesn’t love robots?
But I do think that, in making art, the “obstacles” actually serve as a filtration device. Think Campbell’s hero’s journey: the learning curve of mastering an instrument, the physical adventure of discovering new music at a record shop or befriending the cool older guy to get his Sharpie-graffitied mix CD, saving up to buy your first guitar, enduring ridicule, the irrational desire to pursue music against the odds. (James Brown didn’t own a pair of shoes until he was 8 years old, and now he is canonized as King.)
Meanwhile, in 2022, surveys show that many kids feel valueless unless they’re an influencer or “artist,” and the leap to content creation over craft has become criminally easy, flooding the market with more karaoke, pantomime and metric-based mush rooted in no authentic movement. (I guess Twee capitalist-core is a culture, but not compared to the Vietnam War, slavery, the space race, the invention of LSD, the discovery of the subconscious, Indian gurus, the sexual revolution or the ’90s heroin epidemic all inspiring new genres.)
Not to sound like Ted Kaczynski’s manifesto, but technology is increasingly the hand inside the sock puppet, not the other way around.
Do I think AI will replace a lot of jobs? Yes, though not immediately; the technology is still crude. Do I think this upending is a net loss? In the long term, no; it could incentivize us to invent entirely new skills to front-run it. (Remember when “learn to code” was an offensive meme?) In fact, I’m very eager to see how we co-evolve or eventually merge into a transhuman cyber Seraphim, once Artificial General Intelligence goes quantum.
But this will be a Faustian trade, have no illusions.
Charlotte Kemp Muhl is the bassist for NYC art-rock band UNI and The Urchins. She has directed all of UNI and The Urchins’ videos and mini-films and engineered, mixed and mastered their upcoming debut album Simulator (out Jan. 13, 2023, on Chimera Music) herself. UNI and The Urchins’ AI-written song and AI-made video for “Simulator” are out now.
If you think 100,000 songs a day going into the market is a big number, “you have no idea what’s coming next,” says Alex Mitchell, founder/CEO of Boomy, a music creation platform that can compose an instrumental at the click of an icon.
Boomy is one of many so-called “generative artificial intelligence” music companies — others include Soundful, BandLab’s SongStarter and Authentic Artists — founded to democratize songwriting and production even more than the synthesizer did in the 1970s, the drum machine in the ’80s and ’90s, digital audio workstations in the 2000s and sample and beat libraries in the 2010s.
In each of those cases, however, trained musicians were required to operate this technology in order to produce songs. The selling point of generative AI is that no musical knowledge or training is necessary. Anyone can potentially create a hit song with the help of computers that evolve with each artificially produced guitar lick or drumbeat.
Not surprisingly, the technological breakthrough has also generated considerable anxiety among professional musicians, producers, engineers and others in the recorded-music industry who worry that their livelihoods could be threatened.
“In our pursuit of the next best technology, we don’t think enough about the impact [generative AI] could have on real people,” says Abe Batshon, CEO of BeatStars, a subscription-based platform that licenses beats. “Are we really helping musicians create, or are we just cutting out jobs for producers?”
Not so, say the entrepreneurs who work in the emerging business. From their perspective, generative AI tools are simply the next step in technology’s long legacy of shaping the way music is created and recorded.
“When the drum machine came out, drummers were scared it would take their jobs away,” says Diaa El All, founder/CEO of Soundful, another AI music-generation application that was tested by hit-makers such as Caroline Pennell, Madison Love and Matthew Koma at a recent songwriting camp in Los Angeles. “But then they saw what Prince and others were able to create with it.”
El All says the music that Soundful can instantly generate, based on user-set parameters, like beats per minute or genre, is simply meant to be a “jumping-off point” for writers to build songs. “The human element,” he says, “will never be replaced.”
BandLab CEO Meng Ru Kuok says that having tools to spark song creation makes a huge difference for young music-makers, who, so far, seem to be the biggest adopters of this technology. Meng claims his AI-powered SongStarter tool, which generates a simple musical loop over which creators can fashion a song, makes new BandLab users “80% more likely to actually share their music as opposed to writing from zero.” (Billboard and BandLab collaborated on Bringing BandLab to Billboard, a portal that highlights emerging artists.)
Other applications for generative AI include creating “entirely new formats for listening,” as Endel co-founder/CEO Oleg Stavitsky says. This includes personalized music for gaming, wellness and soundtracks. Lifescore modulates human-made scores in real time, which can reflect how a player is faring in a video game, for example; Endel generates soundscapes, based on user biometrics, to promote sleep, focus or other states (Lifescore also has a similar wellness application); and Tuney targets creators who need dynamic, personalized background music for videos or podcasts but do not have a budget for licensing.
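To illustrate the adaptive idea in the simplest possible terms, the sketch below maps a live signal (here, a heart-rate reading) onto the parameters a generative engine might expose. The names, thresholds and mappings are hypothetical; none of these companies has published such code, and a real product would use far richer signals and models.

```python
# Hypothetical mapping from a live signal to generation parameters,
# in the spirit of adaptive tools like Endel and Lifescore.
from dataclasses import dataclass

@dataclass
class SoundscapeParams:
    tempo_bpm: float   # pacing of the generated material
    intensity: float   # 0.0 = sparse and calm, 1.0 = dense and driving

def params_from_heart_rate(hr_bpm: float, goal: str) -> SoundscapeParams:
    """Derive generation parameters from a heart-rate reading."""
    if goal == "sleep":
        # Aim slightly below the listener's current heart rate to ease them down.
        return SoundscapeParams(tempo_bpm=max(40.0, hr_bpm - 10), intensity=0.1)
    if goal == "focus":
        return SoundscapeParams(tempo_bpm=70.0, intensity=0.4)
    # e.g. a game engine could raise intensity as on-screen action ramps up
    return SoundscapeParams(tempo_bpm=hr_bpm, intensity=0.7)

print(params_from_heart_rate(64.0, "sleep"))
# SoundscapeParams(tempo_bpm=54.0, intensity=0.1)
```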
These entrepreneurs contend that generative AI will empower the growth of the “creator economy,” which is already worth over $100 billion and counting, according to Influencer Marketing Hub. “We’re seeing the blur of the line between creator and consumer, audience and performer,” says Mitchell. “It’s a new creative class.”
In the future Mitchell and El All both seem to imagine, every person can have the ability to create songs, much like the average iPhone user already has the ability to capture high-quality photos or videos on the fly. It doesn’t mean everyone will be a professional, but it could become a similarly common pastime.
The public’s fascination with — and fear of — generative AI reached a new milestone this year with the introduction of DALL-E 2, a generator that instantaneously creates images from text inputs with a surprising level of precision.
Musician Holly Herndon, who has used AI tools in her songwriting and creative direction for years, says that in the next decade, it will be as easy to generate a great song as it is to generate an image. “The entertainment industries we are familiar with will change radically when media is so easy and abundant,” she says. “The impact is going to be dramatic and very alien to what we are used to.”
Mac Boucher, creative technologist and co-creator of non-fungible token project WarNymph along with his sister Grimes, agrees. “We will all become creators and be capable of creating anything.”
If these predictions are fulfilled, the music business, which is already grappling with oversaturation, will need to recalibrate. Instead of focusing on consumption and owning intellectual property, more companies may shift to artist services and the development of tools that aid song creation — similar to Downtown Music Holdings’ decision to sell off its 145,000-song catalog over the last two years and focus on serving the needs of independent talent.
Major music companies are also investing in and establishing relationships with AI startups. Hipgnosis, Reservoir, Concord and Primary Wave are among those that have worked with AI stem separation company Audioshake, while Warner Music Group has invested in Boomy, Authentic Artists and Lifescore.
The advancement of AI-generated music has understandably sparked a debate over its ethical and legal use. Currently, the U.S. Copyright Office will not register a work created solely by AI, but it will register works created with human input. However, what constitutes that input has yet to be clearly defined.
Answers to these questions are being worked out in court. In 2019, industry leader OpenAI issued a comment to the U.S. Patent and Trademark Office arguing that using copyrighted material to train an AI program should be considered fair use, although many copyright owners and some other AI companies disagree.
Now one of OpenAI’s projects, built in collaboration with Microsoft and GitHub, is battling a class-action suit over a similar issue. Copilot, an AI tool designed to generate computer code, has been accused of replicating copyrighted code because it was trained on billions of lines of protected material written by human developers.
The executives interviewed for this story say they hire musicians to create training material for their programs and do not touch copyright-protected songs.
“I don’t think songwriters and producers are talking about [AI] enough,” says music attorney Karl Fowlkes. “This kind of feels like a dark, impending thing coming our way, and we need to sort out the legal questions.”
Fowlkes says the most important challenge to AI-generated music will come when these tools begin creating songs that emulate specific musicians, much like DALL-E 2 can generate images clearly inspired by copyrighted works from talents like Andy Warhol or Jean-Michel Basquiat.
Mitchell says that Boomy may cross that threshold in the next year. “I don’t think it would be crazy to say that if we can line up the right framework to pay for the rights [to copyrighted music], [you could] see something from us sooner than people might think on that front,” he says. “We’re looking at what it’s going to take to produce at the level of DALL-E 2 for music.”