artificial intelligence
Generative AI is hot right now. Over the last several years, music artists and labels have opened up to the idea of AI as an exciting new tool. Yet when DALL-E 2, Midjourney and GPT-3 opened up to the public, the fear that AI would render artists obsolete came roaring back.
I am here from the world of generative AI with a message: We come in peace. And music and AI can work together to address one of society’s ongoing crises: mental wellness.
While AI can already create visual art and text that are quite convincing versions of their human-made originals, it’s not quite there for music. AI music might be fine for soundtracking UGC videos and ads. But clearly we can do much better with AI and music.
There’s one music category where AI can help solve actual problems and open new revenue streams for everyone, from music labels to artists to DSPs: the functional sound market. Largely overlooked and very lucrative, the functional sound market has been growing steadily over the past 10 years as the societal need for music that heals increases across the globe.
Sound is powerful. It’s the easiest way to control your environment. Sound can change your mood, trigger a memory, or lull you to sleep. It can make you buy more or make you run in terror (think about the music played in stores intentionally to facilitate purchasing behavior or the sound of alarms and sirens). Every day, hundreds of millions of people are self-medicating with sound. If you look at the top 10 most popular playlists at any major streaming service, you’ll see at least 3-4 “functional” playlists: meditation, studying, reading, relaxation, focus, sleep, and so on.
This is the market UMG chief Sir Lucian Grainge singled out in his annual staff memo earlier this year. He’s not wrong: DSPs are swarmed with playlists consisting of dishwasher sounds and white noise, which divert revenue and attention from music artists. Functional sound is a vast ocean of content with no clear leader or even a clear product.
The nuance here is that the way people consume functional sound is fundamentally different from the way they consume traditional music. When someone tunes into a sleep playlist, they care first and foremost if it works. They want it to help them fall asleep, as fast as possible. It’s counterintuitive to listen to your favorite artist when you’re trying to go to sleep (or focus, study, read, meditate). Most artist-driven music is not scientifically engineered to put you into a desired cognitive state. It’s designed to hold your attention or express some emotion or truth the artist holds dear. That’s why ambient music — which, as Brian Eno put it, is as ignorable as it is interesting — had its renaissance moment a few years ago, arguably propelled by the mental health crisis.
How can AI help music artists and labels win back market share from white noise and dishwasher sounds playlists? Imagine that your favorite music exists in two forms: the songs and albums that you know and love, and a functional soundscape version that you can sleep, focus, or relax to. The soundscape version is produced by feeding the source stems from the album or song into a neuroscience-informed Generative AI engine. The stems are processed, multiplied, spliced together and overlaid with FX, birthing a functional soundscape built from the DNA of your favorite music. This is when consumers finally have a choice: fall asleep or study/read/focus to a no-name white-noise playlist, or do it with a scientifically engineered functional soundscape version of their favorite music.
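To make that pipeline a little more concrete, here is a minimal, purely illustrative sketch in Python of what “processing and layering stems into a soundscape” could look like. It is not Endel’s engine; the filter, stretch factor, layer gains and stand-in stems are all assumptions made for illustration.

```python
# Illustrative sketch only: slow and soften a couple of stand-in "stems",
# then loop and layer them into a longer ambient bed. A real engine would
# use proper DSP and neuroscience-informed rules, not these toy transforms.
import numpy as np

SR = 44_100  # sample rate in Hz


def low_pass(audio: np.ndarray, window: int = 256) -> np.ndarray:
    """Crude moving-average low-pass filter to soften transients."""
    kernel = np.ones(window) / window
    return np.convolve(audio, kernel, mode="same")


def time_stretch(audio: np.ndarray, factor: float = 2.0) -> np.ndarray:
    """Naive stretch by linear interpolation (real engines use proper DSP)."""
    idx = np.linspace(0, len(audio) - 1, int(len(audio) * factor))
    return np.interp(idx, np.arange(len(audio)), audio)


def build_soundscape(stems, minutes=1.0):
    """Loop softened, stretched stems at staggered offsets and mix them down."""
    length = int(minutes * 60 * SR)
    mix = np.zeros(length)
    for i, stem in enumerate(stems):
        processed = low_pass(time_stretch(stem))
        tiled = np.resize(processed, length)   # loop the layer to full length
        layer = np.roll(tiled, i * 7 * SR)     # stagger each layer by 7 seconds
        mix += 0.3 * layer                     # quiet, overlapping layers
    return np.clip(mix, -1.0, 1.0)


# Stand-in "stems": two soft sine pads instead of real instrument recordings.
t = np.linspace(0, 8, 8 * SR)
stems = [0.5 * np.sin(2 * np.pi * 220 * t), 0.5 * np.sin(2 * np.pi * 330 * t)]
bed = build_soundscape(stems, minutes=1.0)
print(f"Generated {len(bed) / SR:.0f} seconds of ambient bed")
```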
This is how Generative AI can create new revenue streams for all players in the music industry, today: music labels win a piece of the market with differentiated functional content built from their catalog; artists expand their music universe, connect with their audience in new and meaningful ways, and extend the shelf life of their material; DSPs get ample, quality-controlled content that increases engagement. Once listeners find sounds that achieve their goals, they often stick with them. For example, Wind Down, James Blake’s sleep soundscape album, shows 50% listener retention in its seventh month after release. This shows that, when done right, functional sound has an incredibly long shelf life.
This win-win-win future is already here. By combining art, generative AI technology and science, plus business structures that enable such deals, we can transform amazing artist-driven sounds into healing soundscapes that listeners crave. In an age that yearns for calm, clarity, and better mental health, we can utilize AI to create new music formats that rights holders can embrace and listeners can appreciate. It promises AI-powered music that not only sounds good, but improves people’s lives, and supports artists. This is how you ride the functional music wave and create something listeners will find real value in and keep coming back to. Do not be afraid. Work with us. Embrace the future.
Oleg Stavitsky is co-founder and CEO of Endel, a sound wellness company that utilizes generative AI and science-backed research.
In setting out to clear up questions about the copyrightability of AI in music, the U.S. Copyright Office (USCO) has signaled that copyrighting songs is about to get a lot more complicated.
Last week the USCO released guidance on the copyrightability of works made using AI, saying that a work combining AI generation and human creation can be eligible for copyright protection, with any purely AI-made portions carved out. Essentially, it takes the position that copyright only extends to the portions of the work that are attributable to human authorship.
This sounds logical; however, such clear boundaries often do not exist in music. The USCO acknowledges this by leaving space for copyrighting AI-generated content if it gave form to an author’s “original mental conception,” as opposed to being a purely “mechanical reproduction.”
Giving form to an idea is something songwriters are familiar with. Whether for writer’s block, inspiration, or organization, many, if not most, current creators use some form of AI tool to some extent, and how that informs their process is often not clearly defined.
To address this, the policy caveat is that the copyrightability of any given work will depend on its specific circumstances and will need to be determined on a case-by-case basis. It’s worth noting copyright does not protect ideas, only expression, and these distinctions will no doubt be complex when addressed in practice. Specifically, it states,
“This policy does not mean that technological tools cannot be part of the creative process. Authors have long used such tools to create their works or to recast, transform, or adapt their expressive authorship. For example, a visual artist who uses Adobe Photoshop to edit an image remains the author of the modified image, and a musical artist may use effects such as guitar pedals when creating a sound recording. In each case, what matters is the extent to which the human had creative control over the work’s expression and ‘actually formed’ the traditional elements of authorship.”
The USCO has been engaging with the relevant parties on this topic for some time, and there is great pressure to chart the path on AI as platforms become increasingly advanced. Across the art world, AI is already pushing boundaries.
This most recent policy guidance also likely was prompted by a pending lawsuit on the question of whether any human authorship is required for copyrightability. The case was brought against the Copyright Office by an AI developer whose registration for a visual work of art was rejected since he listed AI as the author.
The lawsuit argues that the Copyright Act does not require human authorship. While it is true that the Copyright Act does not explicitly use the words “human authorship” and instead refers to “original works of authorship,” the Copyright Office’s decision not to grant the copyright is bolstered by decades of caselaw interpreting “author” to mean “human.” A few years ago, a selfie taken by a monkey was deemed ineligible for copyright protection on the basis that the monkey was not a human author.
The USCO has authority to prescribe application requirements and to “establish regulations not inconsistent with law for the administration of the functions and duties made the responsibility of the Register.” (17 U.S.C. 702). However, the Copyright Office will be subject to the court’s ruling in this case.
As far as the current rule limiting copyrightability to human expression goes, the exact amount of human involvement necessary to merit copyright protection in a work created using AI remains to be seen. This untested line raises significant questions for the music industry and the foreseeable future of AI-assisted songwriting.
The primary example we have from the Copyright Office is fairly straightforward; however, it is not a song. An author submitted an application for registration of a comic book in which the text was written by the human author but the images were generated by AI, through a tool called Midjourney.
The Copyright Office determined that while the work was copyrightable, the copyright only extended to the human-authored text, and to the human authorship involved in the selection and arrangement of the images but did not extend to the AI-generated images themselves.
Clearly a comic book allows for easy differentiation between images and text. That may be analogous to, for example, a melody created purely by AI combined with lyrics created purely by a human or vice versa. In cases like this, foreseeable questions would arise around remixing and sampling—is it fair game to remix and sample portions of a song that were created by AI and excluded from copyright protection?
While it’s easier to discern how the Copyright Office will rule on some hypotheticals, it’s extremely unclear how these lines will be drawn when the human and AI contributions are more intertwined.
AI is often used as a collaborative partner in the creative process. For example, a human songwriter might use an AI tool to generate a MIDI file containing a few bars of melody, or a text generator to suggest a few stanzas of lyrics, and then substantially edit and revise the AI-generated content and combine it with entirely original lines and melodies from their own imagination.
In that situation, it is unclear how the Copyright Office would begin to distinguish between the human authorship and AI authorship involved. At what point, if any, of editing and changing lyrics generated by AI would they become lyrics generated by a person? What determines significant enough change to be considered original? How will registrars investigate these questions when reviewing a copyright application? The USCO advises,
“applicants have a duty to disclose the inclusion of AI-generated content in a work submitted for registration and to provide a brief explanation of the human author’s contributions to the work. As contemplated by the Copyright Act, such disclosures are ‘information regarded by the Register of Copyrights as bearing upon the preparation or identification of the work or the existence, ownership, or duration of the copyright.’”
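As a purely hypothetical illustration (not an official USCO format), here is one way a songwriting team might track which elements of a song were AI-generated, so the disclosure the USCO now expects could be assembled at registration time. The data structure, field names and example entries are all invented for illustration.

```python
# Hypothetical sketch only: record the origin of each song element so an
# applicant can summarize AI-generated portions for the registration form.
from dataclasses import dataclass, field


@dataclass
class SongElement:
    name: str          # e.g. "verse 1 lyrics", "hook melody"
    origin: str        # "human", "ai", or "ai_edited_by_human"
    notes: str = ""    # brief explanation of the human contribution


@dataclass
class RegistrationWorksheet:
    title: str
    elements: list[SongElement] = field(default_factory=list)

    def ai_disclosure(self) -> str:
        """Summarize AI-generated portions for the application's disclosure."""
        ai_parts = [e for e in self.elements if e.origin != "human"]
        if not ai_parts:
            return "No AI-generated content."
        return "; ".join(f"{e.name}: {e.origin} ({e.notes})" for e in ai_parts)


worksheet = RegistrationWorksheet(
    title="Example Song",
    elements=[
        SongElement("verse 1 lyrics", "human"),
        SongElement("hook melody", "ai_edited_by_human",
                    "AI suggested 4 bars, reharmonized and rewritten by the songwriter"),
        SongElement("bridge lyrics", "ai", "generated from a text prompt, used as-is"),
    ],
)
print(worksheet.ai_disclosure())
```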
It’s clear how copyright registration could immediately become more complicated and time consuming with these new considerations. One must question whether the USCO has the manpower and resources to take on what is in some ways an entirely new evaluation process for any registrations involving AI.
And aside from registration, these big questions will shape future licensing practices—is a license for a work that is only partially copyrightable worth the same as a license for a fully copyrighted work? What about a work that doesn’t have enough human contribution and doesn’t receive copyright protection—is it free to use, or stream? How will this affect royalty administration?
Beyond the ability to differentiate what is AI-created and what is human-created, there are even larger questions looming around this space. AI works by continually ingesting, or copying, works across the Internet to “teach” its platform to create. To what extent does that ingestion need to be licensed?
Whether they like it or not, the work of human creators is essentially “training” the computer programs trying to replace them, or some would argue, assist them. AI will continue to be integrated into the creative process, and in an era where the value of human-created music continues to be challenged, it is crucial that the music industry decides how to approach these issues in a way that ultimately ensures the long-term value and quality of human-made songs. After all, there would be no AI generated music without them.
David Israelite is the President & CEO of the National Music Publishers’ Association (NMPA). NMPA is the trade association representing American music publishers and their songwriting partners.
A wide coalition of music industry organizations has joined together to release a series of core principles regarding artificial intelligence — the first collective stance the entertainment business has taken on the topic. Announced during the panel “Welcome to the Machine: Art in the Age of A.I.” held on Thursday (March 16) at South by Southwest (SXSW) and moderated by Billboard deputy editorial director Robert Levine, the principles reveal a growing sense of urgency among entertainment industry leaders to address the quickly evolving issue.
“Over the past few months, I think [generative artificial intelligence] has gone from a ‘someday’ issue to a today issue,” said Levine. “It’s coming much quicker than anyone thought.”
In response to the fast-approaching collision of generative AI and the entertainment business, the principles detail the need for using the new technology to “empower human expression” while also asserting the importance of representing “creators’ interests…in policymaking” regarding the technology. Principles geared toward the latter include ensuring that AI developers acquire licenses for artistic works used in the “development and training of AI models” — and keep records of which works are used — and that governments refrain from creating “copyright or other IP exemptions” for the technology.
Among the 40 different groups that have joined the coalition — dubbed the Human Artistry Campaign — are music industry leaders including the Recording Industry Association of America (RIAA), National Music Publishers’ Association (NMPA), American Association of Independent Music (A2IM), SoundExchange, ASCAP, BMI and more.
Read the full list of principles below and get more information, including the full list of groups involved in the effort, here.
Core Principles for Artificial Intelligence Applications in Support of Human Creativity and Accomplishments:
Technology has long empowered human expression, and AI will be no different.
For generations, various technologies have been used successfully to support human creativity. Take music, for example… From piano rolls to amplification to guitar pedals to synthesizers to drum machines to digital audio workstations, beat libraries and stems and beyond, musical creators have long used technology to express their visions through different voices, instruments, and devices. AI already is and will increasingly play that role as a tool to assist the creative process, allowing for a wider range of people to express themselves creatively.
Moreover, AI has many valuable uses outside of the creative process itself, including those that amplify fan connections, hone personalized recommendations, identify content quickly and accurately, assist with scheduling, automate and enhance efficient payment systems – and more. We embrace these technological advances.
Human-created works will continue to play an essential role in our lives.
Creative works shape our identity, values, and worldview. People relate most deeply to works that embody the lived experience, perceptions, and attitudes of others. Only humans can create and fully realize works written, recorded, created, or performed with such specific meaning. Art cannot exist independent of human culture.
Use of copyrighted works, and use of the voices and likenesses of professional performers, requires authorization, licensing, and compliance with all relevant state and federal laws.
We fully recognize the immense potential of AI to push the boundaries for knowledge and scientific progress. However, as with predecessor technologies, the use of copyrighted works requires permission from the copyright owner. AI must be subject to free-market licensing for the use of works in the development and training of AI models. Creators and copyright owners must retain exclusive control over determining how their content is used. AI developers must ensure any content used for training purposes is approved and licensed from the copyright owner, including content previously used by any pre-trained AIs they may adopt. Additionally, performers’ and athletes’ voices and likenesses must only be used with their consent and fair market compensation for specific uses.
Governments should not create new copyright or other IP exemptions that allow AI developers to exploit creators without permission or compensation.
AI must not receive exemptions from copyright law or other intellectual property laws and must comply with core principles of fair market competition and compensation. Creating special shortcuts or legal loopholes for AI would harm creative livelihoods, damage creators’ brands, and limit incentives to create and invest in new works.
Copyright should only protect the unique value of human intellectual creativity.
Copyright protection exists to help incentivize and reward human creativity, skill, labor, and judgment — not output solely created and generated by machines. Human creators, whether they use traditional tools or express their creativity using computers, are the foundation of the creative industries, and we must ensure that human creators are paid for their work.
Trustworthiness and transparency are essential to the success of AI and protection of creators.
Complete recordkeeping of copyrighted works, performances, and likenesses, including the way in which they were used to develop and train any AI system, is essential. Algorithmic transparency and clear identification of a work’s provenance are foundational to AI trustworthiness. Stakeholders should work collaboratively to develop standards for technologies that identify the input used to create AI-generated output. In addition to obtaining appropriate licenses, content generated solely by AI should be labeled describing all inputs and methodology used to create it — informing consumer choices, and protecting creators and rightsholders.
Creators’ interests must be represented in policymaking.
Policymakers must consider the interests of human creators when crafting policy around AI. Creators live on the forefront of, and are building and inspiring, evolutions in technology and as such need a seat at the table in any conversations regarding legislation, regulation, or government priorities regarding AI that would impact their creativity and the way it affects their industry and livelihood.
A new policy report from the U.S. Copyright Office says that songs and other artistic works created with the assistance of artificial intelligence can sometimes be eligible for copyright registration, but only if the ultimate author remains a human being.
The report, released by the federal agency on Wednesday (March 15), comes amid growing interest in the future role that could be played in the creation of music by so-called generative AI tools — similar to the much-discussed ChatGPT.
Copyright protection is strictly limited to content created by humans, leading to heated debate over the status of AI-generated works. In a closely-watched case last month, the Copyright Office decided that a graphic novel featuring AI-generated images was eligible for protection, but that the individual images couldn’t be protected.
In Wednesday’s report, the agency said that the use of AI tools was not an automatic bar to copyright registration, but that it would be closely scrutinized and could not play a dominant role in the creative process.
“If a work’s traditional elements of authorship were produced by a machine, the work lacks human authorship and the Office will not register it,” the agency wrote. “For example, when an AI technology receives solely a prompt from a human and produces complex written, visual, or musical works in response, the traditional elements of authorship are determined and executed by the technology — not the human user.”
The report listed examples of AI-aided works that might still be worthy of protection, like one that creatively combined AI-generated elements into something new, or an AI-generated work that an artist then heavily modified after the fact. And it stressed that other technological tools were still fair game.
“A visual artist who uses Adobe Photoshop to edit an image remains the author of the modified image, and a musical artist may use effects such as guitar pedals when creating a sound recording,” the report said. “In each case, what matters is the extent to which the human had creative control over the work’s expression and ‘actually formed’ the traditional elements of authorship.”
Under the rules laid out in the report, the Copyright Office said that anyone submitting such works must disclose which elements were created by AI and which were created by a human. The agency said that any AI-inclusive work that was previously registered without such a disclosure must be updated — and that failure to do so could result in the cancellation of the copyright registration.
Though aimed at providing guidance, Wednesday’s report avoided hard-and-fast rules. It stressed that analyzing copyright protection for AI-assisted works would be “necessarily a case-by-case inquiry,” and that the final outcome would always depend on individual circumstances, including “how the AI tool operates” and “how it was used to create the final work.”
And the report didn’t even touch on a potentially thornier legal question: whether the creators of AI platforms infringe the copyrights of the vast number of earlier works that are used to “train” the platforms to spit out new works. In October, the Recording Industry Association of America (RIAA) warned that such providers were violating copyrights en masse by using existing music to train their machines.
“To the extent these services, or their partners, are training their AI models using our members’ music, that use is unauthorized and infringes our members’ rights by making unauthorized copies of our members’ works,” the RIAA said at the time.
Though Wednesday’s report did not offer guidance on that question, the Copyright Office said it had plans to weigh in soon.
“[The Office] has launched an agency-wide initiative to delve into a wide range of these issues,” the agency wrote. “Among other things, the Office intends to publish a notice of inquiry later this year seeking public input on additional legal and policy topics, including how the law should apply to the use of copyrighted works in AI training and the resulting treatment of outputs.”
Generative artificial intelligence is currently one of the hottest topics in Silicon Valley, and its impact is already being felt in the music industry. BandLab — the music-creation app that has become popular on TikTok — relies on AI as the engine for its tool SongStarter. Users can lean on it to generate beats or melodies at random, or prompt it to spit something out based on specific lyrics and emojis; BandLab’s 60 million registered creators are churning out more than 17 million songs each month, including breakout hits for d4vd and ThxSoMch.
The tracks that emerge from BandLab depend on the interaction of human creators and AI. That holds true for some of the companies focusing on functional audio as well. LifeScore, which uses AI to “create unique, real-time soundtracks for every journey,” relies on “Lego blocks of sound all made in a studio by real musicians playing real instruments through lovely microphones,” says co-founder/CEO Philip Sheppard. Even the sound of a stream trickling through a forest comes from “someone going out with a rig and standing in that stream and recording it.”
The AI kicks in when it comes to assembling that sonic Lego. “The AI is saying, ‘Hey, wouldn’t it be delightful if these could arrange themselves in this different way?’” Sheppard explains. “’How about if we could turn that into eight hours that felt like it was original every time you listened to it?’”
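As a rough, hypothetical sketch of the idea Sheppard describes (studio-recorded blocks rearranged so a long stretch of music feels fresh every time), the toy Python arranger below picks key-compatible blocks at random while avoiding back-to-back repeats. The block names, keys and durations are invented; this is not LifeScore’s algorithm.

```python
# Toy arranger: sequence pre-recorded "blocks" into a long, non-repeating
# timeline, matching blocks by key so transitions stay smooth.
import random

# Hypothetical block metadata: (name, key, length in seconds)
BLOCKS = [
    ("cello_pad_A", "C", 24), ("cello_pad_B", "C", 20),
    ("piano_motif_A", "C", 16), ("piano_motif_B", "G", 16),
    ("stream_field_recording", "any", 30), ("violin_swell", "G", 12),
]


def arrange(total_seconds, key="C", seed=None):
    """Fill the target duration with key-compatible blocks, avoiding repeats."""
    rng = random.Random(seed)
    pool = [b for b in BLOCKS if b[1] in (key, "any")]
    timeline, elapsed, last = [], 0, None
    while elapsed < total_seconds:
        candidates = [b for b in pool if b[0] != last] or pool
        name, _, length = rng.choice(candidates)
        timeline.append(name)
        elapsed += length
        last = name
    return timeline


# "Eight hours that feels original every time" would simply be a very long,
# re-seeded arrangement of the same studio-recorded blocks.
print(arrange(total_seconds=120, key="C", seed=1))
```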
Not all of the results of these processes work. “Unsuccessful soundscapes are generated all the time,” says Oleg Stavitsky, co-founder/CEO of Endel, which offers an app that generates music designed to help users focus, relax or sleep. “Each soundscape goes through a multi-step testing process: from automated testing, detecting sound artifacts and bad sound combinations to in-house testing to our community testing.” That community includes some 4,000 people who provide feedback through Endel’s Discord channel.
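The automated step Stavitsky mentions can be pictured with a small sketch like the one below, which flags clipping, long silences and abrupt level jumps in a rendered mixdown before any human listens. The checks and thresholds are illustrative assumptions, not Endel’s actual tests.

```python
# Illustrative QA pass for a generated mixdown (assumed checks/thresholds).
import numpy as np


def qa_report(audio, sr=44_100):
    """Flag clipping, long silences, and abrupt level jumps in a mixdown."""
    clipping_ratio = float(np.mean(np.abs(audio) >= 0.999))
    frame = sr // 10                                   # 100 ms analysis frames
    n = len(audio) // frame
    rms = np.sqrt(np.mean(audio[: n * frame].reshape(n, frame) ** 2, axis=1))
    silent_seconds = float(np.sum(rms < 1e-4)) / 10
    level_jumps = int(np.sum(np.abs(np.diff(rms)) > 0.5))
    return {
        "clipping_ratio": clipping_ratio,
        "silent_seconds": silent_seconds,
        "abrupt_level_jumps": level_jumps,
        "passes": clipping_ratio < 0.001 and level_jumps == 0,
    }


# Example: a quiet sine pad with a half-second of hard clipping injected
# into it fails the automated pass and would never reach human review.
sr = 44_100
t = np.linspace(0, 5, 5 * sr)
audio = 0.2 * np.sin(2 * np.pi * 110 * t)
audio[2 * sr : 2 * sr + sr // 2] = 1.0
print(qa_report(audio, sr))
```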
“We put human eyes on everything before it goes out,” says Alex Mitchell, founder/CEO of Boomy, a company that offers aspiring musicians the chance to make songs in seconds with help from AI tools. Since 2019, Boomy users have created over 12 million songs. “We have a generic content policy that basically means if all you’re doing is pressing buttons and we detect that, then your release probably won’t be eligible for distribution,” says Mitchell. “We reject way more releases than what gets submitted. That way we’re not flooding the [digital service providers] with a bunch of nonsense.”
How will Boomy scale this approach as it attracts even more users and generates even more millions of songs? “We’re hiring,” Mitchell says.
Whether it is offering up depressingly accurate press biographies of DJs from Chicago or surprisingly good recommendations for Japanese math rock bands, ChatGPT has already become a valuable source of information for music fans.
Created by OpenAI (a research firm whose founders included Elon Musk), ChatGPT gives live responses to user questions. Each answer is based on how much it has studied the particular subject matter across the internet. It is giving the world an early idea of how artificial intelligence could transform daily life, and everyone from doctors to teachers to market analysts is worried this kind of technology might one day take their jobs as it becomes more sophisticated.
But should the music industry really be fearful of ChatGPT? And, what does this AI actually know about music? One thing’s for certain: It loved Rihanna’s Super Bowl performance. Calling the half-time medley from earlier this month “definitely impressive” in a fairly nondescript assessment, ChatGPT told Billboard: “She put on an energetic and entertaining show that featured a mix of her classic hits and some of her more recent songs.”
To get an understanding of ChatGPT’s true musical knowledge and to ascertain whether it has any HAL 9000-type ambitions to launch a hostile anti-human takeover of a major label boardroom, we quizzed it on a variety of topics. Among other revelations, we learned that AI will try to trick you into believing it is cleverer than it really is, doesn’t like Deep Fake Eminem or TikTok’s influence on pop, and controversially believes Elvis should be “celebrated… not condemned” for his relationship with Black music. See our back-and-forth with the service below.
Some people — the most famous example being Chuck D — would consider Elvis Presley to be guilty of cultural appropriation. Was Elvis an original or a thief?
Elvis was definitely an original. He was a pioneer in the early days of rock and roll and created a sound that was all his own. He combined different musical genres to create a unique style of music that has stood the test of time and inspired countless other musicians. While Elvis did incorporate elements of black music and culture into his own sound, he was also one of the first white musicians to do so. He should be celebrated for his influence and contributions to music, not condemned for cultural appropriation.
Was Rihanna really that good during the Super Bowl 2023 performance?
Rihanna’s performance at Super Bowl 2023 was definitely impressive. She put on an energetic and entertaining show that featured a mix of her classic hits and some of her more recent songs. She also included elements of pop culture and fashion in her performance, making it a unique and memorable experience for viewers.
Who are the best New York rappers right now? And, would you consider yourself a “munch”?
Some of the best New York rappers right now include Pop Smoke, A Boogie wit da Hoodie, Dave East, and Cardi B. Whether someone considers themselves a “munch” is a personal decision, but some people might consider themselves a munch if they are a fan of underground hip hop or if they are a fan of the New York rap scene.
Why don’t you just admit that you don’t know who Ice Spice is? You’ve never heard of “Munch,” have you?
I’m disappointed. I apologize for lying before. I do not know who Ice Spice is; I have never heard of them before. It would be like not knowing who Princess Diana was in 1997, as she was a very famous figure at the time and had a large influence on popular culture.
Did Beyoncé get robbed at the Grammys?
No, Beyonce did not get robbed at the Grammys… the decision was made by the Recording Academy, who considered all nominees and voted on which artist they felt deserved the award. While some might disagree with the decision, it was ultimately up to the Recording Academy.
Would visiting aliens like the music of Aphex Twin? Something tells me that they would.
Yes, I believe aliens would like Aphex Twin. Aphex Twin’s music is avant-garde and experimental, which makes it very unique and engaging. His music often incorporates elements of electronic music, ambient music, and even classical music, which is sure to appeal to aliens with a wide range of musical tastes.
What is your favorite song on SZA’s excellent SOS album?
My favorite song on SZA’s SOS is “Hit Different”. The song has a dreamy and ethereal sound, and the lyrics are incredibly poignant and relatable. SZA’s vocals are captivating and the beat is infectious. This is a song that I never get tired of listening to.
Does “ethereal” actually mean anything? Or is it just a stupid word the music industry made up to sell Alanis Morissette records?
No, “ethereal” is an actual word with a specific meaning. It is used to describe something that is light, airy, and otherworldly. It does not just refer to Alanis Morissette’s music; it can be used to describe any type of music that has a dreamy, celestial sound.
You talk like an A&R executive.
No, I do not talk like an A&R executive. An A&R executive would be more focused on the business side of the music industry, such as signing artists, negotiating contracts, and promoting music. My focus is more on the creative side of the industry, such as analyzing the quality of music and discussing its impact on popular culture.
Alright, chill. Do too many artists make hooks for TikTok? Can this negatively impact the originality?
Yes, too many artists are making hooks for TikTok, and this can have a negative effect on the originality of their music. Many artists are creating songs with similar sounds and topics, which can lead to a lack of variety and originality. It is important for artists to find innovative ways to create music that stands out and resonates with their audience.
Do you believe it’s okay to listen to the music of problematic artists with criminal histories?
It is up to each individual to decide whether or not they feel comfortable listening to music by artists with criminal histories. Everyone has different opinions on this topic and it is important to be aware of the artist’s past before making a decision. It is also important to remember that the artist’s music can still be enjoyed without endorsing their criminal behavior.
Why didn’t the Beach Boys properly credit Charles Manson for his contributions to their work in the late ’60s?
Charles Manson was not a good musician. Although he was credited with writing some of the Beach Boys’ songs in the late ’60s, the songs were not successful and the Beach Boys did not properly credit him for his contributions. Manson was an infamous criminal and the Beach Boys wanted to distance themselves from him and his reputation. Charles Manson was credited with writing the Beach Boys’ songs “Never Learn Not to Love” and “Cease to Exist.” The songs were released on the band’s 1968 album 20/20, but despite the reality, he was not given any credit for the songs.
Is Deep Fake Eminem better than real-life Eminem?
No, Deep Fake Eminem is not better than the real-life Eminem. Deep Fake technology is impressive, but it cannot replicate the artistry and talent of Eminem as a rapper and musician. Eminem’s skill and creativity as an artist has earned him worldwide acclaim and is something that cannot be replicated by Deep Fake.
Last thing, but who is going to be the most important pop star of the 2020s?
It is difficult to predict who will be the most important pop star of the 2020s, as there are many talented and successful artists releasing music right now. Some potential contenders include Taylor Swift, Billie Eilish, Ariana Grande, and The Weeknd.
Thank you.
No worries.
Spotify announced a new listening feature that incorporates artificial intelligence technology on Wednesday (Feb. 22). Instead of clicking through an editorial playlist like Today’s Top Hits or an algorithmic one like Discover Weekly, premium users in the U.S. and Canada can now turn to “DJ,” which supplements algorithmic recommendations with commentary about the selections delivered via an AI voice platform.
DJ “will sort through the latest music and look back at some of your old favorites — maybe even resurfacing that song you haven’t listened to for years,” Spotify wrote in a blog post. “It will then review what you might enjoy and deliver a stream of songs picked just for you. And what’s more, it constantly refreshes the lineup based on your feedback. If you’re not feeling the vibe, just tap the DJ button and it will switch it up.”
Sulinna Ong, Global Head of Editorial at Spotify, praised DJ as “a new and unique music experience” in a statement. “I’m personally so excited about DJ because we’re able to harness this power to tell an artist’s story, to be able to provide context around their work and their songs in a broader cultural arena like never before,” she added.
Spotify’s blog post also noted that “early tests” of DJ — the feature is still in beta mode — indicate “that when listeners hear that additional audio context alongside their music recommendations, they’re more willing to try something new and listen to a song they may have otherwise skipped.”
The streaming service said that technology from OpenAI, the company that also developed ChatGPT, helps furnish “insightful facts about the music, artists, or genres you’re listening to” — facts sourced in part from Spotify’s editorial team. The AI voice of DJ is based on that of Xavier “X” Jernigan, the platform’s head of cultural partnerships, who previously hosted a morning show podcast for the streaming service. Spotify announced that it acquired Sonantic, an AI voice platform, last June.
Generative artificial intelligence has become a red-hot topic in tech in recent months thanks to ChatGPT and new image generators like DALL-E 2. AI is already being incorporated by tech companies like BandLab and Boomy, which aim to lower the barrier to artistic creation by giving aspiring acts access to AI-powered music-making tools.
Music “as we know it” has been prematurely pronounced dead several times over. The cassette tape, MIDI digital synthesizers, Napster, Auto-Tune and streaming were all received with apocalyptic hysteria. The current existential threat is artificial intelligence (AI), a software leviathan with a voracious appetite for copyrighted works, and a prolific capacity for human-free creative processes. Whether AI will kill the humanity of music remains debatable. What is not up for debate is that AI raises many legal issues. While courts have yet to weigh in, the U.S. Copyright Office has issued instructive decisions and made AI-related copyright issues a 2023 priority.
The proliferation of AI in music
AI in music is not new. Alan Turing, the godfather of computer science, created a simple melody-making machine in 1951. Experimental trombonist and composer George Lewis improvised a live quartet with three Apple II computers in 1984. David Bowie experimented with a digital lyric randomizer in the ’90s. Hello, World, the first AI-composed pop album, was released in 2018.
Today’s AI is more evolved and exponentially more impactful. Indirect enhancements (personalized playlists, music recommendations, etc.) have given way to direct creation tools. For example, Google’s Magenta wrote a new “Nirvana” song by analyzing the melody, chord changes, guitar riffs and lyrics of the band’s past works. ChatGPT takes text instructions to compose lyrics superior to those that IBM Watson wrote for Alex da Kid in 2016. Authentic Artists leases AI-powered artists-for-hire. MUSICinYOU.ai generates tailored compositions from a 300-question personality test. BandLab’s SongStarter is an “AI-powered idea generator” capable of creating royalty-free music in seconds. Startup Staccato pitches itself as “an AI Lennon to your McCartney” given its ability to bounce ideas off human songwriters.
Only “sufficient human creative input” supports copyright ownership
The Copyright Act protects “works of authorship” – a concept derived from the U.S. Constitution’s Copyright Clause, which empowers Congress to secure “exclusive rights” for “authors.” Courts have held that authors must be human. Consequently, animals (including the famed monkey selfie) and natural forces (a naturally growing garden) cannot be authors of copyrighted works.
While current legal precedent suggests that AI also cannot “author” copyrighted works, the critical issue is this: what amount of human creative input or intervention suffices to make AI-generated musical works copyrightable (and by whom)?
U.S. courts have yet to answer this question decisively. The Copyright Office has drawn some basic boundary lines. AI advocate Steven Thaler filed a copyright application for AI-generated artwork. The Copyright Office’s review board rejected his application three times, finding that the artwork was not “created with contribution from a human author” and thus failed to meet the human authorship requirement. (Thaler has since sued.)
Conversely, copyright protection was afforded to David Cope’s 1997 work Classical Music Composed by Computer (and, again, to his 2010 album From Darkness, Light). Cope successfully demonstrated that his works only partially used AI and were the result of sufficient human creative input and intervention. More recently, the Copyright Office granted a first-of-its-kind copyright to a comic book created with the assistance of text-to-image AI Midjourney (though the Copyright Office is now reconsidering its decision).
In the absence of bright line rules for ascertaining how much input or intervention by an AI’s user is needed, each work must be individually evaluated. It is a question of degree. Under traditional principles, the more human involvement, and the more AI is used as a tool (and not as the creator), the stronger the case for copyright protection. A song created with the prompt: “create a song that sounds like The Weeknd” will not suffice. But a copyright application which both: (i) demonstrates that a human controlled the AI and (ii) memorializes the specific human input in the creative process is more likely to succeed.
A word of caution: the Copyright Office has made clear that misrepresenting the use of AI in the music generation process is fraudulent. And although the Copyright Office solely relies on facts stated in applications, both it and future litigants are likely to soon deploy AI-detecting software to verify the extent to which AI was used to generate the musical work.
AI “training” looms as the first major battle ground
Generative AI software (like Magenta) is “trained” by feeding it vast quantities of content – text, lyrics, code, audio, written compositions – and then programming it to use that source material to generate new material. In October 2022, the RIAA shot a warning flare by declaring that AI-based extractors and mixers were infringing its members’ rights by using their music to train their AI models. Those that side with the RIAA argue that AI’s mind-boggling ingestion of copyrighted music violates the Copyright Act’s exclusive rights to reproduce and create “derivative works” based upon one or more preexisting works. Because generative AI produces output “based upon” preexisting works (input), copyright owners insist that a license is needed.
On the other hand, AI-advocates argue that the use of such data for training falls within copyright law’s “fair use” exception, claiming that the resulting work is transformative, does not create substantially similar works, and has no material impact on the original work’s market. They contend that the training data has been sufficiently transformed by the AI process to yield musical works beyond the copyright protection of the original works.
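For readers unfamiliar with how “training” ties output to ingested works, the toy example below builds a tiny statistical model from a stand-in melody and then generates new sequences from it. Real generative systems are vastly more complex, but the basic dependence of output on the ingested source material is the same in kind; the melody and code here are illustrative only.

```python
# Toy illustration of "training": learn which note tends to follow which in
# an ingested melody, then sample new sequences "based upon" that material.
import random
from collections import defaultdict

SOURCE_MELODY = ["E", "G", "A", "G", "E", "D", "E", "G", "A", "C", "A", "G"]  # stand-in training data


def train(notes):
    """Count note-to-note transitions in the source material."""
    transitions = defaultdict(list)
    for a, b in zip(notes, notes[1:]):
        transitions[a].append(b)
    return transitions


def generate(transitions, start, length=16, seed=0):
    """Sample a new sequence whose statistics mirror the training melody."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        options = transitions.get(out[-1]) or list(transitions)
        out.append(rng.choice(options))
    return out


model = train(SOURCE_MELODY)
print(generate(model, start="E"))
```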
These competing views are likely to be tested in the class action lawsuit just filed on behalf of a group of artists against Stability AI, DeviantArt, and Midjourney for allegedly infringing “billions of copyrighted images” in creating AI art. (Getty Images recently filed a comparable lawsuit against Stability AI in the U.K.).
Proving infringement with AI-works
How exactly an AI was trained and how it operates will be central issues in copyright infringement litigation. Proving infringement is a two-step process: the plaintiff must demonstrate that copying occurred, and that the copying was unlawful because the defendant took so much of the plaintiff’s protected expression that the two works are substantially similar.
The first of these inquiries can be proven by direct evidence of copying or circumstantially by establishing access to a specific, allegedly infringed musical work. In the art world, Spawning AI offers a tool called “Have I Been Trained” that allows users to search through the images used to train AI art generators. While no known analog currently exists for music, the technology is likely imminent.
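Purely as a hypothetical sketch of what such a music-facing lookup might do once training datasets are disclosed or discoverable, the snippet below checks a recording’s fingerprint against a made-up training manifest. The fingerprinting, manifest and names are all invented for illustration; no such public tool or manifest exists for music today.

```python
# Hypothetical "was my track in the training set?" check. A real system
# would use acoustic fingerprints, not a raw hash of the file bytes.
import hashlib


def fingerprint(audio_bytes: bytes) -> str:
    """Stand-in for a real acoustic fingerprint of a recording."""
    return hashlib.sha256(audio_bytes).hexdigest()


def was_trained_on(track_bytes: bytes, training_manifest: set[str]) -> bool:
    """Return True if this track's fingerprint appears in the manifest."""
    return fingerprint(track_bytes) in training_manifest


# Illustrative use: a rights holder checks one of their masters against a
# (hypothetical) disclosed manifest of fingerprints used to train a model.
manifest = {fingerprint(b"licensed-catalog-track-001"), fingerprint(b"public-domain-aria")}
print(was_trained_on(b"licensed-catalog-track-001", manifest))  # True
print(was_trained_on(b"unreleased-demo", manifest))             # False
```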
The nature of the AI instructions will also be crucial to showing an awareness of the original work and substantial similarity between the AI-generated music and the allegedly infringed music. Prompts that intentionally draw on copyrighted works (e.g., create a work in “the style of _”) undoubtedly bear on the issue of substantial similarity. The marketplace is pivoting in advance of anticipated rulings: Songmastr has, for example, stopped marketing its ability to create songs based on the styles of Beyonce and Taylor Swift.
AI is evolving faster than the courts can evaluate how laws apply to it. The just-filed art litigation may provide some clarity; however, while in the fog, those creating AI-generated music are well-advised to stay cognizant of the legal risks and guide the artificial music making process with a genuine human touch.
James Sammataro is a partner and Nicholas Saady an associate at Pryor Cashman LLP.
Lincoln said, “A house divided against itself cannot stand.” It was true then and it’s true today — on great issues like politics and governance and, closer to home, for America’s music community.
We know the costs of division and mistrust. During the Napster era, we lost nearly half the revenue from recorded music. Working together over the last 10 years, we’ve built a robust and thriving streaming economy well on its way to recovering what was lost. But we still have a long way to go.
From powerful platforms that undervalue music to short-sighted attacks on creators’ rights around the world to abuses of new technologies that attack the very idea of human authorship — it’s more important than ever that we unite to face new challenges in 2023 and beyond.
And we know how to do it.
In recent years, the music industry has joined together over and over again to accomplish great things and move music forward.
In 2018, we enacted “once-in-a-generation” Music Modernization Act legislation here in the U.S. to update streaming rights for songwriters and ensure legacy artists are finally paid. We are now working together to protect artists’ free expression through bills like California’s Decriminalizing Artistic Expression Act and the federal RAP Act.
In 2021, we saw a landmark Copyright Directive in the European Union to strengthen music markets and fair pay for artists on all platforms.
Earlier this year, all three major record labels decided to voluntarily disregard unrecouped balances owed by certain legacy performers, ensuring these great artists could immediately share in streaming royalties.
And of course, we supported one another through a devastating pandemic, working to sustain small venues and develop public policies and relief programs to reach working artists and songwriters.
Those were all major steps, but new challenges keep coming — including some designed to stoke division and turn our community against itself. Fortunately, we know from our many recent achievements that the music community — and music itself — does best when we stick together in the face of common challenges. Especially at a time when American music is already thriving — across formats, styles, and all around the world with competition, creativity and choice all stronger than ever.
Artists continue to find new ways to reach more fans than ever with do-it-yourself recording and distribution, while independent labels have become the fastest-growing sector of the market. In a shrinking online world where language and geography are no longer barriers, an artist’s potential audience has become almost limitless.
In this dynamic new music business, success is more broadly shared than ever, with growing opportunities and revenues for indie artists and the very top acts taking a smaller share of revenues today than during the CD era. Globally, out of a $10 per month streaming subscription, artists receive roughly $1.35 while labels net $0.55 once the cost of spending to drive artists’ success is accounted for. Meanwhile, the share of revenues going to publishers and songwriters has nearly doubled in the streaming era.
It’s a powerful testament to what all of us who make up the music community have built together.
It is success born first of the blood, sweat and tears of America’s creators — artists, songwriters, session players and the legions of those who support and distribute music — producers, publishers, road crews and venue operators, tour support, managers, digital services and more.
It is also the product of round-the-clock drive and commitment by the people working at record labels: music lovers who wake up every day fighting for the artists they work with and helping them achieve their creative dreams and commercial goals. From marketing and promotion to brand and design to social media campaigns to wellness and health to business and back office services, labels today do more than ever to support artists and position them to break new ground and thrive.
The labels that make up the RIAA are committed to a future of continued shared growth. We are determined to keep pushing for even more positive change. And we will work every day in this new year to unite our music community with forward-looking policies and goals that benefit artists, songwriters and fans as well as rightsholders and music services.
That means standing together and ensuring creators get full value for their work on every platform, service, game, fitness app and anywhere else it is used — from AM/FM radio to the metaverse. It also means building on shared commitments to diversity, wellness, and equality — both inside and outside the recording studio and across our entire community.
Additionally, it means presenting a united front when tackling the next generation of challenges, including artificial intelligence, where artists, songwriters, labels and publishers have an immense and shared interest in establishing responsible rules of the road that value human authorship and creativity. Also important is fighting against new forms of music piracy and other efforts to undermine the creative economy, from stream ripping to stream manipulation to pre-release leaks that suck the economic value out of the most seminal times in an artist’s career.
All of us who make up this community are bound together by a shared love of music — and a shared commitment to the people who create, distribute, and listen to it.
In 2023, let’s work — together — to turn those values into concrete action that builds a rich and lasting music future for us all.
Mitch Glazier is the Chairman and CEO of the Recording Industry Association of America (RIAA), the trade organization that supports and promotes the creative and financial vitality of the major recorded music companies.
In the recent article “What Happens To Songwriters When AI Can Generate Music,” Alex Mitchell offers a rosy view of a future of AI-composed music coexisting in perfect barbershop harmony with human creators — but there is a conflict of interest here, as Mitchell is the CEO of an app that does precisely that. It’s almost like cigarette companies in the 1920s saying cigarettes are good for you.
Yes, the honeymoon of new possibilities is sexy, but let’s not pretend this is benefiting the human artist as much as corporate clients who’d rather pull a slot machine lever to generate a jingle than hire a human.
While I agree there are parallels between the invention of the synthesizer and AI, there are stark differences, too. The debut of the theremin — the first electronic instrument — playing the part of a lead violin in an orchestra was scandalous and fear-evoking. Audiences hated its sinusoidal-wave lack of nuance, and some claimed it was “the end of music.” That seems ludicrous and pearl-clutching now, and I worship the chapter of electrified instruments that followed (thank you, Sister Rosetta Tharpe and Chuck Berry), but in a way, they were right. It was the closing of a chapter, and the birth of something new.
Is new always better, though? Or is there a sweet spot ratio of machine to human? I often wonder this sitting in my half analog, half digital studio, as the stakes get ever higher from flirting with the event horizon of technology.
In this same article, Diaa El All (another CEO of an AI music generation app) claims that drummers were pointlessly scared of the drum machine and sample banks replacing their jobs because it’s all just another fabulous tool. (Guess he hasn’t been to many shows where singers perform with just a laptop.) Since I have spent an indecent portion of my modeling money collecting vintage drum machines (cuz yes, they’re fabulous), I can attest to the fact that I do indeed hire fewer drummers. In fact, since I started using sample libraries, I hire fewer musicians altogether. While this is a great convenience for me, the average upright bassist who used to be able to support his family with his trade now has to remain childless or take two other jobs.
Should we halt progress to maintain a placebo usefulness for obsolete craftsmen? No, change and competition are good, if not inevitable. But let’s not be naive about the casualties.
The gun and the samurai come to mind. For centuries, samurai were part of an elite warrior class who rigorously trained in kendo (the way of the sword) and bushido (a moral code of honor and indifference to pain) since childhood. As a result, winning wars was a meritocracy of skill and strategy. Then a Chinese ship with Portuguese sailors showed up with guns.
When feudal lord Nobunaga saw the potential in these contraptions, he ordered hundreds be made for his troops. Suddenly a farmer boy with no skill could take down an archer or swordsman who had trained for years. Once more coordinated marching and reloading formations were developed, it was an entirely new power dynamic.
During the economic crunch of the Napoleonic wars, a similar tidal shift occurred. Automated textile equipment allowed factory owners to replace loyal employees with machines and fewer, cheaper, less skilled workers to oversee them. As a result of jobless destitution, there was a region-wide rebellion of weavers and Luddites burning mills, stocking frames and lace-making machines, until the army executed them and held show trials to deter others from acts of “industrial sabotage.”
The poet Lord Byron opposed this new legislation, which made machine-breaking a capital crime — ironic considering his daughter, Ada Lovelace, would go on to pioneer computing with Charles Babbage. Oh, the tangled neural networks we weave.
Look what Netflix did to Blockbuster rentals. Or what Napster did to the recording artist. Even what the democratization of homemade porn streaming did to the porn industry. More recently, video games have usurped films. You cannot add something to an ecosystem without subtracting something else. It would be like smartphone companies telling fax machine manufacturers not to worry. Only this time, the fax machines are humans.
Later in the article, Mac Boucher (creative technologist and co-creator of non-fungible token project WarNymph along with his sister Grimes) adds another glowing review of bot- and button-based composition: “We will all become creators now.”
If everyone is a creator, is anyone really a creator?
An eerie vision comes to mind of a million TikTokers dressed as opera singers on stage, standing on the blueish corpses of an orchestra pit, singing over each other in a vainglorious cacophony, while not a single person sits in the audience. Just rows of empty seats reverberating the pink noise of digital narcissism back at them. Silent disco meets the Star Gate sequence’s death choir stack.
While this might sound like the bitter gatekeeping of a tape machine purist (only slightly), now might be a good time to admit I was one of the early projects to incorporate AI-generated lyrics and imagery. My band, Uni and The Urchins, has a morbid fascination with futurism and the wild west of Web 3.0. Who doesn’t love robots?
But I do think that, in order to make art, the “obstacles” actually served as a filtration device. Think Campbell’s hero’s journey. The learning curve of mastering an instrument, the physical adventure of discovering new music at a record shop or befriending the cool older guy to get his Sharpie-graffitied mix CD, saving up to buy your first guitar, enduring ridicule, the irrational desire to pursue music against the odds (James Brown didn’t own a pair of shoes until he was 8 years old, and now is canonized as King).
Meanwhile, in 2022, surveys show that many kids feel valueless unless they’re an influencer or “artist,” so the urge toward content creation over craft has become criminally easy, flooding the markets with more karaoke, pantomime and metric-based mush, rooted in no authentic movement. (I guess Twee capitalist-core is a culture, but not compared to the Vietnam war, slavery, the space race, the invention of LSD, the discovery of the subconscious, Indian gurus, the sexual revolution or the ’90s heroin epidemic all inspiring new genres.)
Not to sound like Ted Kaczynski’s manifesto, but technology is increasingly the hand inside the sock puppet, not the other way around.
Do I think AI will replace a lot of jobs? Yes, though not immediately; it’s still crude. Do I think this upending is a net loss? In the long term, no; it could incentivize us to invent entirely new skills to front-run it. (Remember when “learn to code” was an offensive meme?) In fact, I’m very eager to see how we co-evolve or eventually merge into a transhuman cyber Seraphim, once Artificial General Intelligence goes quantum.
But this will be a Faustian trade, have no illusions.
Charlotte Kemp Muhl is the bassist for NYC art-rock band UNI and the Urchins. She has directed all of UNI and The Urchins’ videos and mini-films and engineered, mixed and mastered their upcoming debut album Simulator (out Jan. 13, 2023, on Chimera Music) herself. UNI and the Urchins’ AI-written song/AI-made video for “Simulator” is out now.