artificial intelligence
A bipartisan group of U.S. senators released draft legislation Thursday (Oct. 12) aimed at protecting musical artists and others from artificial intelligence-generated deepfakes and other replicas of their likeness, like this year’s infamous “Fake Drake” song.
The draft bill, labeled the "Nurture Originals, Foster Art, and Keep Entertainment Safe Act," or NO FAKES Act, would create a federal right for artists, actors and others to sue those who create "digital replicas" of their image, voice, or visual likeness without permission.
In announcing the bill, Sen. Chris Coons (D-Del.) specifically cited the April release of “Heart On My Sleeve,” an unauthorized song that featured AI-generated fake vocals from Drake and The Weeknd.
“Generative AI has opened doors to exciting new artistic possibilities, but it also presents unique challenges that make it easier than ever to use someone’s voice, image, or likeness without their consent,” Coons said in a statement. “Creators around the nation are calling on Congress to lay out clear policies regulating the use and impact of generative AI.”
The draft bill quickly drew applause from music industry groups. The RIAA said it would push for a final version that “effectively protects against this illegal and immoral misappropriation of fundamental rights that protect human achievement.”
“Our industry has long embraced technology and innovation, including AI, but many of the recent generative AI models infringe on rights — essentially instruments of theft rather than constructive tools aiding human creativity,” the group wrote in the statement.
The American Association of Independent Music offered similar praise: “Independent record labels and the artists they work with are excited about the promise of AI to transform how music is made and how consumers enjoy art, but there must be guardrails to ensure that artists can make a living and that labels can recoup their investments.” The group said it would push to make sure that the final bill’s provisions were “accessible to small labels and working-class musicians, not just the megastars.”
A person’s name and likeness — including their distinctive voice — are already protected in most states by the so-called right of publicity, which lets individuals control how their identity is commercially exploited by others. But those rights are currently governed by a patchwork of state statutes and common-law systems.
The NO FAKES Act would create a nationwide property right in a person’s image, voice, or visual likeness, allowing an individual to sue anyone who produces a “newly-created, computer-generated, electronic representation” of it. Unlike many state-law systems, that right would not expire at death and could be controlled by a person’s heirs for 70 years after their passing.
A tricky balancing act for any publicity rights legislation is the First Amendment and its protections for free speech. In Thursday’s announcement, the NO FAKES Act’s authors said the bill would include specific carveouts for replicas used in news coverage, parody, historical works or criticism.
“Congress must strike the right balance to defend individual rights, abide by the First Amendment, and foster AI innovation and creativity,” Coons said.
The draft was co-authored by Sen. Marsha Blackburn (R-Tenn.), Sen. Amy Klobuchar (D-Minn.), and Sen. Thom Tillis (R-N.C.).
The RIAA has asked to have AI voice cloning added to the government’s piracy watch list, officially known as the Review of Notorious Markets for Counterfeiting and Piracy.
The RIAA typically writes in each year, requesting that forms of piracy like torrenting, stream ripping, cyber lockers and free music downloading be included in the final list. All of these categories of piracy are still present in the RIAA’s letter to the U.S. Trade Representative this year, but this is the first time the trade organization, which represents the interests of record labels, has added a form of generative AI to its recommendations.
The RIAA noted that it believes AI voice cloning, also referred to as ‘AI voice synthesis’ or ‘AI voice filters,’ infringes on its members’ copyrights and on artists’ rights to their voices, and it calls out one U.S.-based AI voice cloning site, Voicify.AI, as deserving particular scrutiny.
According to the letter, Voicify.AI’s service includes voice models that emulate sound recording artists like Michael Jackson, Justin Bieber, Ariana Grande, Taylor Swift, Elvis Presley, Bruno Mars, Eminem, Harry Styles, Adele, Ed Sheeran, and others, as well as political figures including Donald Trump, Joe Biden, and Barack Obama.
The RIAA claims that this type of service infringes on copyrights because it “stream-rips the YouTube video selected by the user, copies the acapella from the track, modifies the acapella using the AI vocal model, and then provides the user unauthorized copies of the modified acapella stem, the underlying instrumental bed, and the modified remixed recording.” Essentially, some of these AI voice cloning sites train their models on copyrighted recordings without authorization.
It additionally claims that there is a violation of the artists’ right of publicity, the right that protects public figures from having their name, likeness, and voice commercially exploited without their permission. This is a more tenuous right, given that it is only a state-level protection and its strength varies by state. It also becomes more limited after a public figure’s death. However, this is possibly the most common legal argument against AI voice cloning technology in the music business.
This form of artificial intelligence first became widely recognized this past spring, when an anonymous TikTok user named Ghostwriter used AI to mimic the voices of Drake and The Weeknd in his song “Heart On My Sleeve” with shocking precision. The song was briefly available on streaming services like YouTube but was taken down after a stern letter from the artists’ label, Universal Music Group; ultimately, it was removed from official services over a copyright infringement claim on the track, not a right of publicity claim.
A few months later, Billboard reported that streamers were in talks with the three major label groups about allowing them to file takedown requests for right of publicity violations — something that was previously only possible in cases of copyright infringement, as dictated by the Digital Millennium Copyright Act (DMCA). Unlike the DMCA, the newly discussed arrangement regarding right of publicity issues would be a voluntary one. In July, UMG’s general counsel and executive vp of business and legal affairs, Jeffrey Harleston, spoke as a witness in a Senate Judiciary Committee hearing on AI and copyright and asked for a new “federal right of publicity” to be made into law to protect artists’ voices.
An additional challenge in regulating this area is that many of the AI services available online to global users are not based in the U.S., meaning the U.S. government has little recourse to stop their alleged piracy, even if alerted by trade organizations like the RIAA. Certain countries are known to be more relaxed on AI regulation — like China, Israel, South Korea, Japan, and Singapore — which has created safe havens for AI companies to grow abroad.
The U.S. Trade Representative still must review the RIAA’s letter, along with recommendations from other industry groups, and determine whether AI voice cloning should be included on the watch list. The office will likely issue its final review at the start of next year.
“OK — now Ghostwriter is ready for us.”
For almost three hours, I have been driving an airport rental car to an undisclosed location — accompanied by an artist manager whose name I only know in confidence — outside the U.S. city we both just flew into. I came here because, after weeks of back-and-forth email negotiations, the manager has promised that I can meet his client, whom I’ve interviewed once off-camera over Zoom, in person. In good traffic, the town we’re headed toward is about an hour from the airport, but it’s Friday rush hour, so we watch as my Google Maps ETA gets later and later with each passing minute. To fill the time, we chat about TikTok trends, our respective careers and the future of artificial intelligence.
AI is, after all, the reason we’re in this car in the first place. The mysterious man I’ve come to meet is a “well-known” professional songwriter-producer, his manager says — at least when he’s using his real name. But under his pseudonym, Ghostwriter, he is best known for creating “Heart on My Sleeve,” a song that employed AI voice filters to imitate Drake and The Weeknd’s voices with shocking precision — and without their consent. When it was posted to TikTok in the spring, it became one of the biggest music stories of the year, as well as one of the most controversial.
At the time of its release, many listeners believed that Ghost’s use of AI to make the song meant that a computer also generated the beat, lyrics or melodies, but as Ghost later explains to me, “It is definitely my songwriting, my production and my voice.” Still, “Heart on My Sleeve” posed pressing ethical questions: For one, how could an artist maintain control over their vocal likeness in this new age of AI? But as Ghost and his manager see it, AI poses a new opportunity for artists to license their voices for additional income and marketing reach, as well as for songwriters like Ghost to share their skills, improve their pitches to artists and even earn extra income.
As we finally pull into the sleepy town where we’re already late to meet with Ghost, his manager asks if I can stall. “Ghost isn’t quite ready,” he says, which I assume means he’s not yet wearing the disguise he dons in all his TikTok videos: a white bedsheet and black sunglasses. (Both the manager and Ghost agreed to this meeting under condition of total anonymity.) As I weave the car through residential streets at random, passing a few front yards already adorned in Halloween decor, I laugh to myself — it feels like an apropos precursor to our meeting.
But fifteen minutes later, when we enter Ghost’s “friend’s house,” I find him sitting at the back of an open-concept living space, at a dining room table, dressed head to toe in black: black hoodie, black sweatpants, black ski mask, black gloves and ski goggles. Not an inch of skin is visible, apart from short glimpses of the peach-colored nape of his neck when he turns his head a certain way.
Though he appears a little nervous to be talking to a reporter for the first time, Ghost is friendly, standing up from his chair to give me a hug and to greet his manager. When I decide to address the elephant in the room — “I know this is weird for all of us” — everyone laughs, maybe a little too hard.
Over the course of our first virtual conversation and, now, this face-to-masked-face one, Ghost and his manager openly discuss their last six months for the first time, from their decision to release “Heart on My Sleeve” to more recent events. Just weeks ago, Ghost returned with a second single, “Whiplash,” posted to TikTok using the voices of 21 Savage and Travis Scott — and with the ambition to get his music on the Grammy Awards ballot.
In a Sept. 5 New York Times story, Recording Academy CEO Harvey Mason Jr. said “Heart on My Sleeve” was “absolutely [Grammy-]eligible because it was written by a human,” making it the first song employing AI voices to be permitted on the ballot. Three days later, however, he appeared to walk back his comments in a video posted to his personal social media, saying, “This version of ‘Heart on My Sleeve’ using the AI voice modeling that sounds like Drake and The Weeknd, it’s not eligible for Grammy consideration.”
In conversation, Ghost and his manager maintain (and a Recording Academy representative later confirms) that “Heart on My Sleeve” will, in fact, be on the ballot because they quietly uploaded a new version of the song (without any AI voice filters) to streaming services on Sept. 8, just days before the Grammy eligibility cutoff and the same day as Mason’s statement.
When the interview concludes, Ghost’s manager asks if we will stay for the takeout barbecue the owner of the house ordered for everyone before the manager and I arrived. At this, Ghost stands up, saying his outfit is too hot and that he ate earlier anyway — or maybe he just realizes that eating would require taking his ski mask off in front of me.
When did Ghostwriter first approach you with this idea, and what were your initial thoughts?
Manager: We first discussed this not long before the first song dropped. He had just started getting into AI. We wanted to do something that could spark much needed conversation and prep us so that we can start moving toward building an environment where this can exist in an ethical and equitable way. What better way to move culture forward around AI than to create some examples of how it can be used and show how the demand and interest is there?
As the person in charge of Ghostwriter’s business affairs, what hurdles did you see to executing his idea?
Manager: When anything new happens, people don’t know how to react. I see a lot of parallels between this moment and the advent of sampling. There was an outcry [about] thievery in 1989 when De La Soul was sued for an uncleared sample. Fast-forward to now, and artist estates are jumping at the opportunity to be sampled and interpolated in the next big hit. All it took was for the industry to define an equitable arrangement for all stakeholders in order for people to see the value in that new form of creativity. I think we agreed that we had an opportunity to show people the value in AI and music here.
Ghostwriter’s songs weren’t created with the consent of Drake, The Weeknd, Travis Scott or 21 Savage. How do you justify using artists’ voices without their consent?
Manager: I like to say that everything starts somewhere, like Spotify wouldn’t exist without Napster. Nothing is perfect in the beginning. That’s just the reality of things. Hopefully, people will see all the value that lies here.
How did you get in touch with the Recording Academy?
Manager: Harvey reached out to Ghostwriter over DM. He was just curious and interested. It’s his job to keep the industry moving forward and to understand what new things are happening. I think he’s still wrapping his head around it, but I thought it was really cool that he put together an industry roundtable with some of the brightest minds — including people in the Copyright Office, legal departments at labels, Spotify, Ghostwriter. We had an open conversation.
I don’t know if Harvey has the answers — and I don’t want to put words in his mouth — but I think he sees that this is a cool tool to help people create great music. [Ultimately,] we just have to figure out the business model so that all stakeholders feel like they have control and are being taken care of.
I think in the near future, we’re going to have infrastructure that allows artists to not only license their voice, but do so with permissions. Like, say I’m artist X. I want to license my voice out, but I want to take 50% of the revenue that’s generated. Plus users can’t use my voice for hate speech or politics. It is possible to create tech that can have permissions like that. I think that’s where we are headed.
“Heart on My Sleeve” is Grammy-eligible after all, but only the version without AI voice filters. Why was it so important to keep trying for Grammy eligibility?
Manager: Our thought process was, it’s a dope record, and it resonated with people. It was a human creator who created this piece of art that made the entire music industry stop and pay attention. We aren’t worried about whether we win or not — this is about planting the seed, the idea that this is a creative tool for songwriters.
Do you still think it pushes the envelope in the same way, given that what is eligible now doesn’t have any AI filter on it?
Manager: Absolutely, because we’re just trying to highlight the fact that this song was created by a human. AI voice filters were just a tool. We haven’t changed the moment around the song that it had. I think it’s still as impactful because all of this is part of the story, the vision we are casting.
Tell me a little about yourself, Ghostwriter. What’s your background?
Ghostwriter: I’ve always been a songwriter-producer. Over time, I started to realize — as I started to get into different rooms and connect with different artists — that the business of songwriting was off. Songwriters get paid close to nothing. It caused me to think: “What can I do as a songwriter who just loves creating to maybe create another revenue stream? How do I get my voice heard as a songwriter?” That was the seed that later grew into becoming Ghostwriter.
I’ve been thinking about it for two years, honestly. The idea at first was to create music that feels like other artists and release it as Ghostwriter. Then when the AI tech came out, things just clicked. I realized, “Wait — people wouldn’t have to guess who this song was meant to sound like anymore,” now that we have this.
I did write and produce “Heart on My Sleeve” thinking that maybe this would be the one where I tried AI to add in voice filters, but the overall idea for Ghostwriter has been a piece of me for some time.
Why did you decide to take “Heart on My Sleeve” from just a fun experiment to a formal rollout?
Ghost: Up until this point, all of the AI voice stuff was jokes. Like, what if SpongeBob [SquarePants] sang this? I think it was exciting for me to try using this as a tool for actual songwriters.
When “Heart on My Sleeve” went viral, it became one of the biggest news stories at the time. Did you anticipate that?
Ghost: There was a piece of me that knew it was really special, but you just can’t predict what happens. I tried to stay realistic. When working in music, you have to remind yourself that even though you think you wrote an incredible song, there’s still a good chance the song is not going to come out or it won’t do well.
Do you think that age played a factor in how people responded to this song?
Manager: For sure. I think the older generations are more purists; it’s a tougher pill for them to swallow. I think younger generations obviously have grown up in an environment where tech moves quickly. They are more open to change and progression. I would absolutely attribute the good response on TikTok to that.
Are you still writing for other people now under your real name while you work on the Ghostwriter project, or are you solely focused on Ghostwriter right now?
Ghost: I am, but I have been placing a large amount of focus [on] Ghostwriter. For me, it’s a place that is so refreshing. Like, I love seeing that an artist is looking for pitch records and I have to figure out how to fit their sound. It’s a beautiful challenge.
This is one of the reasons I’m so passionate about Ghostwriter. There are so many talented songwriters that are able to chameleon themselves in the studio to fit the artist they are writing for. Even their vocal delivery, their timbre, where the artist is in their life story. That skill is what I get to showcase with Ghostwriter.
You’ve said songwriters aren’t treated fairly in today’s music industry. Was there a moment when you had this revelation?
Ghost: It was more of a progression…
Manager: I think the fact that Ghost’s songs feel so much like the real thing and resonate so much with those fan bases, despite the artists not actually being involved, proves how important songwriters are to the success of artists’ projects. We’re in no way trying to diminish the hard work and deserving nature of the artists and the labels that support them. We’re just trying to shine a light on the value that songwriters bring and that their compensation currently doesn’t match that contribution. We owe it to songwriters to find solutions for the new reality. Maybe this is the solution.
Ghost: How many incredible songs are sitting on songwriters and producers’ desktops that will never be heard by the world? It almost hurts me to think about that. The Ghostwriter project — if people will hopefully support it — is about not throwing art in the trash. I think there’s a way for artists to help provide that beauty to the world without having to put in work themselves. They just have to license their voices.
The counterpoint to that, though, is that artists want to curate their discographies. They make a lot of songs, but they might toss some of them so that they can present a singular vision — and many would say songs using AI to replicate an artist’s voice would confuse that vision. What do you say to that?
Ghost: I think this may be a simple solution, but the songs could be labeled as clearly separate from the artist.
Manager: That’s something we have done since the beginning. We have always clearly labeled everything as AI.
Ideally, where should these AI songs live? Do they belong on traditional streaming services?
Manager: One way that this can play out is that [digital service providers] eventually create sort of an AI section where the artist who licenses their voice can determine how much of the AI songs they want monetarily and how they want their voices to be used.
Ghost: These songs are going to live somewhere because the fans want them. We’ve experienced that with Ghostwriter. The song is not available anymore by us, but I was just out in my area and heard someone playing “Heart on My Sleeve” in their car as they drove by. One way or another, we as the music industry need to come to terms with the fact that good music is always going to win. The consumer and the listener are always in the seat of power.
There’s 100,000 songs added to Spotify every day, and the scale of music creation is unprecedented. Does your vision of the future contribute to a scale problem?
Manager: We don’t really see it as a problem. Because no matter how many people are releasing music, you know, there’s only going to be so many people in the world that can write hit songs. The cream always rises to the top.
Ghost: My concern is that a lot of that cream-of-the-crop music is just sitting on someone’s desktop because an artist moved in a different direction or something beyond their control. My hope is we’ll see incredible new music become available and then we can watch as democracy pushes it to the top.
Can you explain how you think AI voice filters serve as a possible new revenue stream for artists?
Manager: Imagine singing a karaoke song in the artist’s voice; a personalized birthday message from your favorite artist; a hit record that is clearly labeled and categorized as AI. It’s also a marketing driver. I compare this to fan fiction — a fan-generated genre of music. Some might feel this creates competition or steals attention away from an artist’s own music, but I would disagree.
We shouldn’t forget that in the early days of YouTube, artists and labels fought to remove every piece of fan-generated content [that used] copyrighted material that they could. Now a decade or so later, almost every music marketing effort centers around encouraging [user-generated content]: TikTok trends, lyric videos, dance choreography, covers, etcetera. There’s inherent value in empowering fans to create content that uses your image and likeness. I think AI voice filters are another iteration of UGC.
Timbaland recently wrote a song and used an AI voice filter to map The Notorious B.I.G.’s voice on top of it, essentially bringing Biggie back from the dead. That raises more ethical questions. Do you think using the voice of someone who is dead requires different consideration?
Manager: It’s an interesting thought. Obviously, there’s a lot of value here for companies that purchase catalogs. I think this all ties back to fan fiction. I love The Doors, and I know there are people who, like me, study exactly how they wrote and performed their songs. I’d love to hear a song from them I haven’t heard before personally, as long as it’s labeled [as a fan-made AI song]. As a music fan, it would be fun for me to consume. It’s like if you watch a film franchise and the fourth film isn’t directed by the same person as before. It’s not the same, but I’m still interested.
When Ghostwriter introduced “Whiplash,” he noted that he’s down to collaborate with and send royalties to Travis Scott and 21 Savage. Have you gotten in touch with them, or Drake or The Weeknd, yet?
Manager: No, we have not been in contact with anyone.
“Heart on My Sleeve” was taken down immediately from streaming services. Are you going about the release of “Whiplash” differently?
Manager: We will not release a song on streaming platforms again without getting the artists on board. That last time was an experiment to prove the market was there, but we are not here to agitate or cause problems.
You’ve said that other artists have reached out to your team about working together and using their voices through AI. Have you started that collaboration process?
Manager: We’re still having conversations with artists we are excited about that have reached out, but they probably won’t create the sort of moment that we want to keep consistently with this project. There’s nothing I can confirm with you right now, but hopefully soon.
Why aren’t you interested in collaborating with the artists who have reached out so far? Is it because of their audience size or their genre?
Manager: It’s more like every moment we have has to add a point and purpose. There hasn’t been anyone yet that feels like they could drive things forward in a meaningful way. I mean, size for sure, and relevancy. We ask ourselves: What does doing a song with that person or act say about the utility and the value of this technology?
Ghost: We’re just always concerned with the bigger picture. When “Whiplash” happened, we all felt like it was right. It was part of a statement I wanted to make about where we were headed. This project is about messaging.
After all this back-and-forth about the eligibility of “Heart on My Sleeve,” do you both feel you’re still in a good place with Harvey Mason Jr. and the Recording Academy?
Manager: For sure, we have nothing but love for Harvey … We have a lot of respect for him, the academy and, ultimately, a lot of respect for all the opinions and arguments out there being made about this. We hear them all and are thinking deeply about it.
Ghostwriter, you’ve opted to not reveal your identity in this interview, but does any part of you wish you could shout from the rooftops that you’re the one behind this project?
Ghost: Maybe it sounds cheesy, but this is a lot bigger than me and Ghostwriter. It’s the future of music. I want to push the needle forward, and if I get to play a significant part in that, then there’s nothing cooler than that to me. I think that’s enough for me.
A version of this story originally appeared in the Oct. 7, 2023, issue of Billboard.
AI-powered hit song analytics platform ChartCipher has successfully completed its beta phase and is now accessible to the public, MyPart and Hit Songs Deconstructed jointly announced on Tuesday (Oct. 10).
“Our mission is to empower music creatives and industry professionals with comprehensive, real-time insights into the DNA of today’s most successful songs, and the trends shaping the music charts,” said Hit Songs Deconstructed co-founder Yael Penn in a statement. “Streams, engagement, and other performance metrics only tell part of the story. ChartCipher is the missing link. It provides comprehensive data reflecting the compositional, lyrical and sonic qualities fueling today’s charts.”
Added Hit Songs Deconstructed co-founder David Penn, “The correlations we can now draw between songwriting and production, spanning various genres and charts, offer unprecedented insights that have the potential to significantly enhance both the creative journey and the decision-making process.”
“ChartCipher’s beta phase confirmed that our AI analytics provide invaluable insights to music creatives and decision-makers,” said MyPart CEO Matan Kollenscher. “From selecting singles through exploring remix and collaboration opportunities to optimizing marketing investments and maximizing catalog utilization, ChartCipher equips users with unique, actionable data vital to making better informed business and creative decisions and understanding the musical landscape.”
Launched in April 2022, ChartCipher combines MyPart’s AI-powered analysis of songs’ compositional, lyrical and sonic qualities with Hit Songs Deconstructed’s analytics delivery platform and song analysis methodologies to offer real-time insights into the qualities that fuel today’s most popular music. The platform utilizes analytics from 10 of Billboard‘s most prominent charts going back to the turn of the century: the Billboard Hot 100, Hot Country Songs, Hot R&B/Hip-Hop Songs, Hot Dance/Electronic Songs, Hot Rock & Alternative Songs, Pop Airplay, Country Airplay, Streaming Songs, Radio Songs and Digital Song Sales.
“Billboard has consistently led the way in global music charts, and we are thrilled to introduce ChartCipher with analytics for 10 of their most prominent charts,” added Yael Penn. “Our longstanding relationship with Billboard, spanning over a decade, marks the start of an exciting new chapter. Together, we aim to provide even deeper, more actionable insights into the driving forces behind today’s most successful songs.”
Gary Trust, senior director of charts at Billboard, added, “Spotlighting ChartCipher’s intriguing insights about the sonic makeup of hit songs further rounds out Billboard’s coverage. We’re excited to add even more analysis of popular charting songs to our reporting on streaming, radio airplay and sales data, as provided by Luminate.”
To celebrate its official launch, ChartCipher has created a Billboard Hot 100 quiz available to anyone who would like to test their knowledge of the compositional, lyrical and production qualities driving the chart.
YouTube recently launched an AI Music incubator with artists and producers from Universal Music Group. The purpose of the group, according to Universal CEO Lucian Grainge, is to explore, experiment, and offer feedback on the AI-related musician tools and products the Google team is researching — with the hope that more artists will benefit from YouTube’s creative suite.
This partnership demonstrates the clear desire to involve the industry in the development stages of AI products and protect the human component of artistry. This desire is heightened in the face of deepfakes. Just last month, Google launched its SynthID watermark, meant to identify AI-generated images (Google DeepMind CEO Demis Hassabis cited the importance of deepfake detection ahead of a contentious election season). “Heart On My Sleeve,” the song created with AI-generated imitations of Drake and The Weeknd’s voices, kicked off the music industry’s scramble to shut down and stamp out any unauthorized use of artists’ voices. Most importantly, though, the viral track proved that AI voice models are here and only improving with each passing day.
As artists, labels, and other rights holders have grown more concerned about AI models learning and profiting from their copyrighted material, fans and creators have discovered new ways to engage with their favorite artists and imagine completely new musical works using their AI voice models. This is prompting other industry executives (myself included) to wonder how these models can continue to be used to explore this new creative frontier of music while protecting artists.
With all of this in mind, the industry needs to mull over a few philosophical questions and consider the distinction between voice cloning and voice synthesis. A singer is much more than timbre, the primary quality that voice models modify. AI voices are not the same as samples, where the whole vocal element is based on an underlying artist’s full performance, including pitch, emotion, timbre, accent and tone.
Regardless, AI innovations will only reach their maximum potential if the industry faces one foundational issue: artists and their labels need to control the ways in which their image, likeness and voice are used. Whether the industry decides to embrace these innovations or limit AI-powered cloning entirely, the next step begins with synthetic voice detection. Is the artist singing on any given track fake or the real deal?
In the early 2000s, music companies found themselves losing control of their content to the digitalization of music. The industry’s initial impulse to crush file-sharing networks like Napster led to the launch of Apple’s iTunes store in 2003 and, eventually, legal streaming. Other digital rights management tools, like Content ID on YouTube, were developed to detect unauthorized use of music. Once the industry learned to embrace digital music and built a foundational infrastructure to support it, streaming revenues soared — breaking the $10 billion mark for the first time in 2022 and making up 84% of the industry’s total revenue, according to the RIAA.
The industry needs synthetic voice detection, but with 120,000 new tracks uploaded to streaming platforms daily (according to Luminate) on top of the already existing back catalogs, can it be done accurately and at scale? The short answer: yes.
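To make the scale argument concrete, here is a minimal, hypothetical sketch of what automated screening for synthetic vocals could look like: extract a fixed-length acoustic fingerprint from each upload and score it with a pre-trained classifier. The model file, feature choice and threshold below are illustrative assumptions, not a description of Moises’ actual system or any platform’s real detection pipeline.

```python
# Illustrative sketch only: batch screening uploads for synthetic vocals.
# Assumes a pre-trained binary classifier (human vs. synthetic vocal) saved
# with joblib; the file name, features and 0.9 threshold are placeholders.
from pathlib import Path

import joblib    # pip install joblib
import librosa   # pip install librosa
import numpy as np

clf = joblib.load("synthetic_voice_clf.joblib")  # hypothetical pre-trained model


def score_track(path: str) -> float:
    """Return the classifier's probability that the lead vocal is synthetic."""
    audio, sr = librosa.load(path, sr=16_000, mono=True)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    # Summarize over time so every track maps to a fixed-length feature vector.
    features = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])
    return float(clf.predict_proba(features.reshape(1, -1))[0, 1])


def screen_catalog(folder: str, threshold: float = 0.9) -> list[str]:
    """Flag files whose synthetic-vocal probability exceeds the threshold."""
    return [str(p) for p in Path(folder).glob("*.wav") if score_track(str(p)) > threshold]


if __name__ == "__main__":
    for flagged in screen_catalog("daily_uploads"):
        print("review needed:", flagged)
```

In practice a production system would pair a far stronger model with human review queues, but the batch structure, scoring every daily upload and surfacing only high-confidence hits, is what makes screening at catalog scale tractable.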
As the industry begins to embrace the responsible use of AI for synthetic voice creation, I strongly believe there should be a corresponding willingness for artists and labels to collaborate in that training process. It’s in their best interests to do this now. AI applications are already scaling in a variety of categories. Well-engineered models are becoming exponentially more efficient and can increasingly manage massive computing tasks. Combined with strategic operational approaches, this is achievable today.
To honor each artist’s decision whether or not to participate in voice models, the industry needs an easy and accessible way for artists to build their own voice models and grant fans and creators permission to use them. This type of initiative, paired with synthetic voice detection, ensures that only the voices and works of those who want to be involved in voice cloning and other derivative AI tools are used. Artists who want to create their own voice models can work with voice synthesis platforms to establish the terms of where and how their voice model can be used, offering more control and even opportunities for monetization.
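As a rough illustration of the kind of permissions described above, here is a hypothetical sketch of how a voice-synthesis platform might encode artist-set terms such as a revenue split, prohibited subject matter and mandatory AI labeling. The field names and checks are assumptions for illustration, not any existing platform’s schema.

```python
# Illustrative sketch only: one way a platform could encode an artist-set
# license for their voice model. All fields and defaults are assumptions.
from dataclasses import dataclass


@dataclass
class VoiceModelLicense:
    artist_id: str
    revenue_share: float  # e.g. 0.5 = artist keeps 50% of revenue generated
    allow_commercial_release: bool = False
    prohibited_topics: tuple[str, ...] = ("hate speech", "political content")
    requires_ai_label: bool = True  # output must be clearly labeled as AI-generated

    def permits(self, use_case: str, topics: set[str]) -> bool:
        """Check a proposed use against the artist's terms."""
        if topics & set(self.prohibited_topics):
            return False
        if use_case == "commercial_release" and not self.allow_commercial_release:
            return False
        return True


# Example: a 50% revenue split with topic restrictions, as described above.
license_terms = VoiceModelLicense(artist_id="artist_x", revenue_share=0.5)
print(license_terms.permits("fan_remix", {"love", "heartbreak"}))  # True
print(license_terms.permits("fan_remix", {"political content"}))   # False
```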
Geraldo Ramos is the co-founder and CEO of Moises, the AI-driven music platform that is transforming the way artists and businesses incorporate AI technology into their workflows.
Warner Music Group CEO Robert Kyncl has a message for a music industry facing disruption from artificial intelligence that’s often likened to the rise of file-sharing a quarter century ago: “You have to embrace technology, because it’s not like you can put technology in a bottle,” he said during an onstage interview at the Code […]
Lyor Cohen discussed “a future where generative AI has a profound impact on music” at the annual Made on YouTube event on Thursday (Sept. 21). YouTube’s longtime global head of music is nothing if not enthusiastic about artificial intelligence and its potential ability to supercharge music-making. Cohen told the attendees that “AI tools are opening up a new playground for creativity;” AI “can be used by artists to amplify and accelerate their creativity;” and AI can usher in “a new era of musical creativity.”
Cohen was joined by Charlie Puth, who played some piano and showed off his beatboxing, and Warner Music Group CEO Robert Kyncl. Kyncl acknowledged that not everyone in music is as excited about AI as Cohen seems to be: “Change is unsettling; we are in that period of change.” He proposed charting a path forward where AI enthusiasts can gain from the technology while artists who are wary of it are somehow shielded from its impacts.
Artists “will create and they will use all kinds of tools to create… that’s their job,” Kyncl said. “It’s our job, the platforms and the music industry, to make sure that artists like Charlie who lean in [to AI] benefit. It’s also our job together to make sure that artists who don’t want to lean in are protected.” He pointed to the success of YouTube’s Content ID system, which helps the platform track user-generated content, as a potential model, because creators can choose to monetize that UGC or block it depending on their preferences.
YouTube previously signaled its interest in being part of music’s AI-driven future in August when it announced an “AI Music Incubator” that will include input from Anitta, Juanes, Ryan Tedder, Rodney Jerkins, and many others.
“This group will explore, experiment and offer feedback on the AI-related musical tools and products they are researching,” Universal CEO Lucian Grainge wrote in a blog post. “Once these tools are launched, the hope is that more artists who want to participate will benefit from and enjoy this creative suite.”
At the Made on YouTube event, CEO Neal Mohan also discussed a suite of new tools for creators that aim to put “the creative power of AI into the hands of billions of people.” These include Dream Screen, which “lets you create AI-generated video or image backgrounds for Shorts by typing in an idea,” and a search function that “will act like a music concierge” when it comes time to find a track to place into a video. “Our creator can just describe her video, and if she wants she can even include information about the length or type of song she’s looking for, and Creator Music suggests the right track at the right price,” Mohan explained.
Jade Beason, a YouTube creator, told the crowd she “spend[s] a lot of time trying to find the right music for videos” and is sometimes “guilty of actually just using the same song [over again] because I just can’t find the right one.” “Music has the ability to change how your audience actually feels when they’re watching your content,” she continued. “It’s the difference between someone seeing a video of yours and laughing or crying… so the idea that we can do this easily amongst everything else is actually quite wild.”
The significance of Washington washed over me as my flight dodged the historic monuments and we descended into DCA. It felt like an apt metaphor for the opportunities and challenges of advocating for music creators’ rights in today’s lightning-fast race into the future.
I have visited many times over the years to fight for the rights of songwriters on Capitol Hill. This week, songwriter members of the American Society of Composers, Authors and Publishers (ASCAP) will once again be with me in Washington for “We Write the Songs,” a performance held at the Library of Congress and co-presented by The ASCAP Foundation. Hit songwriters will play for Members of Congress and others and share the stories behind their beloved songs. As songwriters, we are also here to affirm our rights as artificial intelligence (AI) and other technologies seek to use our creations.
ASCAP is a unique entity in the music world — we are the only performance-rights organization (PRO) founded and governed by democratically elected music creators and publishers. As a membership organization, we represent nearly a million songwriters, composers, lyricists and music publishers across every genre. We are also the only U.S. PRO that operates on a not-for-profit basis so, unlike others whose profits may go elsewhere to corporate dividends and private equity investors, we put creators first in everything we do.
As the chairman of ASCAP’s board, I have seen our industry go through immense changes. When music moved from records to tapes to CDs to pirated online listening, our members descended upon Washington to ensure the rights of songwriters were respected across new platforms and listening experiences.
Emerging technologies – whether streaming or AI – have always presented our industry with both challenges and opportunities. But in every instance, we as songwriters are often the first to feel the effects when technology outpaces the law.
During Songwriter Advocacy Day, held the day after We Write the Songs, ASCAP members – the songwriters, composers and publishers that form the soundtrack to our lives – will meet with Members of Congress and urge them to protect creators in the age of AI.
At ASCAP, we have developed six guiding principles for AI and we need Congress to act to uphold them:
Human Creators First, prioritizing rights and compensation for human creativity
Transparency, in identifying AI vs. human-generated works and retaining metadata
Consent, protecting the right for creators to decide whether their work is included in an AI training license
Compensation, making sure creators are paid fairly when their work is used in ANY way by AI, which is best accomplished in a free market, NOT with government-mandated licensing that essentially eliminates consent
Credit, when creators’ works are used in new AI-generated music
Global Consistency, an even playing field that values intellectual property across the global music and data ecosystem
While most songwriters work behind the scenes, our work has enormous value to an industry that generates $170 billion a year for the U.S. economy. But we have long been over-regulated — we are some of the most heavily controlled small business owners in the country. Roughly three-quarters of the average American songwriter’s income is subject to federal government regulations. All the while, big media and tech companies are consistently looking for ways to pay songwriters less by regulating us even more.
ASCAP has embraced new and emerging advances in technology, and we have the capacity and infrastructure to manage them at scale. But it has remained painfully clear that any new technology needs to respect existing copyright law. Music creators are concerned about the threat to their livelihoods, and 8 out of 10 believe AI companies need better regulation. Our mission at ASCAP is to help music creators navigate the future while protecting their rights and livelihoods, and enabling the type of innovation that will move the entire music industry forward.
Just because AI requires a high volume of inputs, that does not mean it cannot be licensed or deserves an exception under the law. Just as we’ve approached the streaming market, we believe the opportunities presented by AI can be realized in the free market. To do so, we need lawmakers to stand with songwriters and not give big tech and AI companies a free ride with government-mandated licenses for AI.
AI is a new challenge, but we are well positioned to meet it as we always have in the face of new technologies. We are ready to help chart the path, and we look forward to sharing those insights — and breaking it down on the dance floor — with the same lawmakers whose partnership and enthusiasm have helped us fight for the rights of songwriters as new technologies emerge.
ASCAP president and chairman of the board Paul Williams is an Oscar-, Grammy- and Golden Globe-winning composer and lyricist who has written “The Rainbow Connection,” “We’ve Only Just Begun,” and many other hits.
TikTok announced new tools to help creators label content that was generated by artificial intelligence. In addition, the company said on Tuesday (Sept. 19) that it plans to “start testing ways to label AI-generated content automatically.”
“AI enables incredible creative opportunities, but can potentially confuse or mislead viewers if they’re not aware content was generated or edited with AI,” the company wrote. “Labeling content helps address this, by making clear to viewers when content is significantly altered or modified by AI technology.”
As AI technology has become better — at generating credible-looking images or mimicking pop stars’ voices, for example — and more popular, regulators have expressed increasing concern about the technology’s potential for misuse.
In July, President Biden’s administration announced that seven leading AI companies made voluntary commitments “to help move toward safe, secure, and transparent development of AI technology.” One key point: “The companies commit to developing robust technical mechanisms to ensure that users know when content is AI generated, such as a watermarking system. This action enables creativity with AI to flourish but reduces the dangers of fraud and deception.”
Voluntary commitments are, of course, voluntary, which is likely why TikTok also announced that it will “begin testing an ‘AI-generated’ label that we eventually plan to apply automatically to content that we detect was edited or created with AI.” Tools to determine whether an image has been crafted by AI already exist, and some are better than others. In June, The New York Times tested five programs, finding that the “services are advancing rapidly, but at times fall short.”
The challenge is that as detection technology improves, so does the tech for evading detection. Cynthia Rudin, a computer science and engineering professor at Duke University, told the paper that “every time somebody builds a better generator, people build better discriminators, and then people use the better discriminator to build a better generator. The generators are designed to be able to fool a detector.”
Similar detection efforts are being discussed in the music industry as it debates how to weigh AI-generated songs relative to tracks that incorporate human input.
“You have technologies out there in the market today that can detect an AI-generated track with 99.9% accuracy, versus a human-created track,” Believe co-founder and CEO Denis Ladegaillerie said in April. “We need to finalize the testing, we need to deploy,” he added, “but these technologies exist.”
The streaming service Deezer laid out its own plan to “develop tools to detect AI-generated content” in June. “From an economic point of view, what matters most is [regulating] the things that really go viral, and usually those are the AI-generated songs that use fake voices or copied voices without approval,” Deezer CEO Jeronimo Folgueira told Billboard this summer.
Moises, another AI-technology company, dove into the fray as well, announcing its own set of new tools on Aug. 1. “There’s definitely a lot of chatter” about this, Matt Henninger, Moises’ vp of sales and business development, told Billboard. “There’s a lot of testing of different products.”
Independent musicians will have more power to negotiate with artificial intelligence developers over “fairer rates and terms for the use of their music” if a newly introduced version of the Protect Working Musicians Act passes the U.S. House, according to Rep. Deborah Ross (D-N.C.).
“AI threatens the creator — finding the person or entity that has co-opted your work and turned it into something else and then going after them is so onerous,” Ross, who sponsored the revised act and sits on the House Judiciary Committee, says in a phone interview from Washington, D.C. “That’s one of the reasons for this bill — to allow people to do this collaboratively. We need to do this sooner than later. We’re seeing this threat every single day.”
The Protect Working Musicians Act, which Rep. Ted Deutch (D-Fla.) introduced in October 2021 a few months before he left Congress, would allow indie artists to collectively bargain for royalty rates with streaming giants such as Spotify and Apple Music. As it stands, the major labels that own most worldwide master recordings have enormous negotiating power to set rates; the act would “give the smaller independent more of a voice,” says Jen Jacobsen, executive director of the Artist Rights Alliance, which worked with Ross on revising the bill.
Ross picked up the bill when Deutch announced he would not return to the House, then held hearings with indie artists in her district, which includes Raleigh. Since then, Ross says, “The AI issue has become even more important.” The revised act would allow artists to behave like plaintiffs in a class-action suit, she adds, “fighting for their rights” with a central attorney.
“Our work is being scraped and ingested and exploited without us even knowing,” Jacobsen says. “Adding the AI platforms seemed like a relevant and important thing to do.”
Writers and artists have warned for months that AI could transform their ideas into new works with no way to get paid for the usage. In April, “Heart On My Sleeve,” an AI-created song that mimics the voices of Drake and The Weeknd, landed millions of TikTok, Spotify and YouTube plays. At the time, Sting told the BBC: “The building blocks of music belong to us, to human beings. That’s going to be a battle we all have to fight in a couple of years: defending our human capital against AI.”
“Musicians are really worried about this — not just the big-name ones, but small artists, too. Small ones, especially,” Jacobsen says. “The most important thing for this bill is that small, independent artists and record labels need to be recognized and have each others’ backs.”
It’s unclear when the House might vote on the revised bill — or if it would pass. “As you can see in Congress, lots of bills aren’t passing — like the budget!” Ross says. “But this has been a very bipartisan issue in the judiciary committee. It’s the perfect time to bring these issues up.”