The RIAA Wants AI Voice Cloning Sites Added to the Government’s Piracy Watchlist
Written by djfrosty on October 11, 2023
The RIAA has asked to have AI voice cloning added to the government’s piracy watchlist, officially known as the Review of Notorious Markets for Counterfeiting and Piracy.
The RIAA typically writes in each year, asking that forms of piracy like torrenting, stream ripping, cyberlockers and free music downloading be included in the final list. All of those categories are still present in the RIAA’s letter to the U.S. Trade Representative this year, but this is the first time the trade organization, which represents the interests of record labels, has added a form of generative AI to its recommendations.
The RIAA says it believes AI voice cloning, also referred to as ‘AI voice synthesis’ or ‘AI voice filters,’ infringes on its members’ copyrights and on artists’ rights to their voices, and it calls out one U.S.-based AI voice cloning site, Voicify.AI, as one that should face particular scrutiny.
According to the letter, Voicify.AI’s service includes voice models that emulate sound recording artists like Michael Jackson, Justin Bieber, Ariana Grande, Taylor Swift, Elvis Presley, Bruno Mars, Eminem, Harry Styles, Adele, Ed Sheeran, and others, as well as political figures including Donald Trump, Joe Biden, and Barack Obama.
The RIAA claims that this type of service infringes on copyrights because it “stream-rips the YouTube video selected by the user, copies the acapella from the track, modifies the acapella using the AI vocal model, and then provides the user unauthorized copies of the modified acapella stem, the underlying instrumental bed, and the modified remixed recording.” In essence, the RIAA argues, some of these AI voice cloning sites train their models on copyrighted recordings without authorization.
It additionally claims that the service violates artists’ right of publicity, the right that protects public figures from having their name, likeness, and voice commercially exploited without their permission. This is a more tenuous right, given that it is only a state-level protection whose strength varies by state, and it becomes more limited after a public figure’s death. Even so, it is possibly the most common legal argument against AI voice cloning technology in the music business.
This form of artificial intelligence first became widely recognized this past spring, when an anonymous TikTok user named Ghostwriter used AI to mimic the voices of Drake and The Weeknd on his song “Heart On My Sleeve” with shocking precision. The song was briefly available on streaming services like YouTube but was taken down after a stern letter from the artists’ label, Universal Music Group. However, the song was ultimately removed from official services over a copyright infringement within the track, not because of a right of publicity claim.
A few months later, Billboard reported that streaming services were in talks with the three major label groups about allowing them to file takedown requests over right of publicity violations, something previously possible only for copyright infringement claims under the Digital Millennium Copyright Act (DMCA). Unlike the DMCA takedown process, the arrangement under discussion for right of publicity issues would be a voluntary one. In July, UMG’s general counsel and executive vp of business and legal affairs, Jeffrey Harleston, testified as a witness at a Senate Judiciary Committee hearing on AI and copyright and called for a new “federal right of publicity” to be written into law to protect artists’ voices.
An additional challenge in regulating this area is that many of the AI services available to users around the world are not based in the U.S., meaning the U.S. government has little recourse to stop their alleged piracy, even when alerted by trade organizations like the RIAA. Certain countries, including China, Israel, South Korea, Japan, and Singapore, are known to take a more relaxed approach to AI regulation, which has created safe havens for AI companies to grow abroad.
The U.S. Trade Representative must still review the RIAA’s letter, along with recommendations from other industry groups, and determine whether it believes AI voice cloning should be included on the watchlist. The office will likely issue its final review at the start of next year.