AI
President Joe Biden said Friday that new commitments by Amazon, Google, Meta, Microsoft and other companies leading the development of artificial intelligence technology to meet a set of AI safeguards brokered by his White House are an important step toward managing the “enormous” promise and risks posed by the technology.
Biden announced that his administration has secured voluntary commitments from seven U.S. companies meant to ensure their AI products are safe before they release them. Some of the commitments call for third-party oversight of the workings of commercial AI systems, though they don’t detail who will audit the technology or hold the companies accountable.
“We must be clear-eyed and vigilant about the threats emerging technologies can pose,” Biden said, adding that the companies have a “fundamental obligation” to ensure their products are safe.
“Social media has shown us the harm that powerful technology can do without the right safeguards in place,” Biden added. “These commitments are a promising step, but we have a lot more work to do together.”
A surge of commercial investment in generative AI tools that can write convincingly human-like text and churn out new images and other media has brought public fascination as well as concern about their ability to trick people and spread disinformation, among other dangers.
The four tech giants, along with ChatGPT-maker OpenAI and startups Anthropic and Inflection, have committed to security testing “carried out in part by independent experts” to guard against major risks, such as to biosecurity and cybersecurity, the White House said in a statement.
That testing will also examine the potential for societal harms, such as bias and discrimination, as well as more theoretical dangers posed by advanced AI systems that could gain control of physical systems or “self-replicate” by making copies of themselves.
The companies have also committed to methods for reporting vulnerabilities in their systems and to using digital watermarking to help distinguish real images from AI-generated ones known as deepfakes.
They will also publicly report flaws and risks in their technology, including effects on fairness and bias, the White House said.
The voluntary commitments are meant to be an immediate way of addressing risks ahead of a longer-term push to get Congress to pass laws regulating the technology. Company executives plan to gather with Biden at the White House on Friday as they pledge to follow the standards.
Some advocates for AI regulations said Biden’s move is a start but more needs to be done to hold the companies and their products accountable.
“A closed-door deliberation with corporate actors resulting in voluntary safeguards isn’t enough,” said Amba Kak, executive director of the AI Now Institute. “We need a much more wide-ranging public deliberation, and that’s going to bring up issues that companies almost certainly won’t voluntarily commit to because it would lead to substantively different results, ones that may more directly impact their business models.”
Senate Majority Leader Chuck Schumer, D-N.Y., has said he will introduce legislation to regulate AI. He said in a statement that he will work closely with the Biden administration “and our bipartisan colleagues” to build upon the pledges made Friday.
A number of technology executives have called for regulation, and several went to the White House in May to speak with Biden, Vice President Kamala Harris and other officials.
Microsoft President Brad Smith said in a blog post Friday that his company is making some commitments that go beyond the White House pledge, including support for regulation that would create a “licensing regime for highly capable models.”
But some experts and upstart competitors worry that the type of regulation being floated could be a boon for deep-pocketed first-movers led by OpenAI, Google and Microsoft, as smaller players are elbowed out by the high cost of making their AI systems, known as large language models, adhere to regulatory strictures.
The White House pledge notes that it mostly applies only to models that “are overall more powerful than the current industry frontier,” set by currently available models such as OpenAI’s GPT-4 and image generator DALL-E 2 and similar releases from Anthropic, Google and Amazon.
A number of countries have been looking at ways to regulate AI, including European Union lawmakers who have been negotiating sweeping AI rules for the 27-nation bloc that could restrict applications deemed to have the highest risks.
U.N. Secretary-General Antonio Guterres recently said the United Nations is “the ideal place” to adopt global standards and appointed a board that will report back on options for global AI governance by the end of the year.
Guterres also said he welcomed calls from some countries for the creation of a new U.N. body to support global efforts to govern AI, inspired by such models as the International Atomic Energy Agency or the Intergovernmental Panel on Climate Change.
The White House said Friday that it has already consulted on the voluntary commitments with a number of countries.
The pledge is heavily focused on safety risks but doesn’t address other worries about the latest AI technology, including the effect on jobs and market competition, the environmental resources required to build the models, and copyright concerns about the writings, art and other human handiwork being used to teach AI systems how to produce human-like content.
Last week, OpenAI and The Associated Press announced a deal for the AI company to license AP’s archive of news stories. The amount it will pay for that content was not disclosed.
LONDON — When the European Union announced plans to regulate artificial intelligence in 2021, legislators started focusing on “high risk” systems that could threaten human rights, such as biometric surveillance and predictive policing. Amid increasing concern among artists and rights holders about the potential impact of AI on the creative sector, however, EU legislators are also now looking at the intersection of this new technology and copyright.
The EU’s Artificial Intelligence Act, which is now being negotiated among politicians in different branches of government, is the first comprehensive legislation in the world to regulate AI. In addition to banning “intrusive and discriminatory uses” of the technology, the current version of the legislation addresses generative AI, mandating that companies disclose content that is created by AI to differentiate it from works authored by humans. Other provisions in the law would require companies that use generative AI to provide details of copyrighted works, including music, on which they trained their systems. (The AI Act is a regulation, so it would pass directly into law in all 27 member states.)
Music executives began paying closer attention to the legislation after the November launch of ChatGPT. In April, around the time that “Heart on My Sleeve,” a track that featured AI-powered imitations of vocals by Drake and The Weeknd, drove home the issue posed by AI, industry lobbyists convinced lawmakers to add the transparency provisions.
So far, big technology companies, including Alphabet, Meta and Microsoft, have publicly stated that they, too, support AI regulation, at least in the abstract. Behind the scenes, however, multiple music executives tell Billboard that technology lobbyists are trying to weaken these transparency provisions by arguing that such obligations could put European AI developers at a competitive disadvantage.
“They want codes of conduct” — as opposed to laws — “and very low forms of regulation,” says John Phelan, director general of international music publishing trade association ICMP.
Another argument is that summarizing training data “would basically come down to providing a summary of half, or even the entire, internet,” says Boniface de Champris, Brussels-based policy manager at the Computer and Communications Industry Association Europe, which counts Alphabet, Apple, Amazon and Meta among its members. “Europe’s existing copyright rules already cover AI applications sufficiently.”
In May, Sam Altman, CEO of ChatGPT developer OpenAI, emerged as the highest-profile critic of the EU’s proposals, accusing it of “overregulating” the nascent business. He even said that his company, which is backed by Microsoft, might consider leaving Europe if it could not comply with the legislation, although he walked back this statement a few days later. OpenAI and other companies lobbied — successfully — to have an early draft of the legislation changed so that “general-purpose AI systems” like ChatGPT would no longer be considered high risk and thus subject to stricter rules, according to documents Time magazine obtained from the European Commission. (OpenAI didn’t respond to Billboard’s requests for comment.)
The lobbying over AI echoes some of the other political conflicts between media and technology companies — especially the one over the EU Copyright Directive, which passed in 2019. While that “was framed as YouTube versus the music industry, the narrative has now switched to AI,” says Sophie Goossens, a partner at global law firm Reed Smith. “But the argument from rights holders is much the same: They want to stop tech companies from making a living on the backs of their content.”
Several of the provisions in the Copyright Directive deal with AI, including an exception in the law for text- and data-mining of copyrighted content, such as music, in certain cases. Another exception allows scientific and research institutions to engage in text- and data-mining on works to which they have lawful access.
So far, the debate around generative AI in the United States has focused on whether performers can use state laws on right of publicity to protect their distinctive voices and images — the so-called “output side” of generative AI. In contrast, both the Copyright Directive and the AI Act address the “input side,” meaning the ways rights holders can either stop AI systems from using their content for training purposes or limit which ones can, in order to license that right.
Another source of tension created by the Copyright Directive is the potential for blurred boundaries between research institutions and commercial businesses. Microsoft, for example, refers to its Muzic venture as “a research project on AI music,” while Google regularly partners with independent research, academic and scientific bodies on technology developments, including AI. To close potential loopholes, Phelan wants lawmakers to strengthen the bill’s transparency provisions, requiring specific details of all music accessed for training, instead of the “summary” that’s currently called for. IFPI, the global recorded-music trade organization, regards the transparency provisions as “a meaningful step in the right direction,” according to Lodovico Benvenuti, managing director of its European office, and he says he hopes lawmakers won’t water that down.
The effects of the AI Act will be felt far outside Europe, partly because it will apply to any company that does business in the 27-country bloc and partly because it will be the first comprehensive set of rules on the use of the technology. In the United States, the Biden administration has met with technology executives to discuss AI but has yet to lay out a legislative strategy. On June 22, Senate Majority Leader Chuck Schumer, D-N.Y., said that he was working on “exceedingly ambitious” bipartisan legislation on the topic, though political divides in the United States as the next presidential election approaches could make passage difficult. China unveiled its own draft laws in April, although other governments may be reluctant to look to legislation there as a model.
“The rest of the world is looking at the EU because they are leading the way in terms of how to regulate AI,” says Goossens. “This will be a benchmark.”
Universal Music Group general counsel and executive vp of business and legal affairs Jeffery Harleston spoke as a witness in a Senate Judiciary Committee hearing on AI and copyright on Wednesday (July 12) to represent the music industry. In his remarks, the executive called for a “federal right of publicity” — a national version of the state-by-state right that protects artists’ likenesses, names and voices — as well as for “visibility into AI training data” and for “AI-generated content to be labeled as such.”
Harleston was joined by other witnesses including Karla Ortiz, a conceptual artist and illustrator who is waging a class action lawsuit against Stability AI; Matthew Sag, professor of artificial intelligence at Emory University School of Law; Dana Rao, executive vp/general counsel at Adobe; and Ben Brooks, head of public policy at Stability AI.
“I’d like to make four key points to you today,” Harleston began. “First, copyright, artists, and human creativity must be protected. Art and human creativity are central to our identity.” He clarified that AI is not necessarily always an enemy to artists, and can be used in “service” to them as well. “If I leave you with one message today, it is this: AI in the service of artists and creativity can be a very, very good thing. But AI that uses, or, worse yet, appropriates the work of these artists and creators and their creative expression, their name, their image, their likeness, their voice, without authorization, without consent, simply is not a good thing,” he said.
Second, he noted the challenges that generative AI poses to copyright. In written testimony, he noted the concern of “AI-generated music being used to generate fraudulent plays on streaming services, siphoning income from human creators.” And while testifying at the hearing, he added, “At Universal, we are the stewards of tens of thousands, if not hundreds of thousands, of copyrighted creative works from our songwriters and artists, and they’ve entrusted us to honor, value and protect them. Today, they are being used to train generative AI systems without authorization. This irresponsible AI is violative of copyright law and completely unnecessary.”
Training is one of the most contentious areas of generative AI for the music industry. To learn how to generate a human voice, a drum beat or lyrics, an AI model will train on up to billions of data points, and often this data contains copyrighted material, like sound recordings, used without the owner’s knowledge or compensation. And while many believe this should be considered a form of copyright infringement, the legality of using copyrighted works as training data is still being determined in the United States and other countries.
The topic is also the source of Ortiz’s class action lawsuit against Stability AI. Her complaint, filed in California federal court along with two other visual artists, alleges that the “new” images generated by Stability AI’s Stable Diffusion model used their art “without the consent of the artists and without compensating any of those artists,” which they feel makes any resulting generation from the AI model a “derivative work.”
In his spoken testimony, Harleston pointed to today’s “robust digital marketplace” — including social media sites, apps and more — in which “thousands of responsible companies properly obtained the rights they need to operate. There is no reason that the same rules should not apply equally to AI companies.”
Third, he reiterated that “AI can be used responsibly…just like other technologies before.” Among his examples of positive uses of AI, he pointed to Lee Hyun [aka MIDNATT], a K-pop artist distributed by UMG who used generative AI to release the same single in six languages, in his own voice, on the same day. “The generative AI tool extended the artist’s creative intent and expression with his consent to new markets and fans instantly,” Harleston said. “In this case, consent is the key,” he continued, echoing Ortiz’s complaint.
While making his final point, Harleston urged Congress to act in several ways — including by enacting a federal right of publicity. Currently, rights of publicity vary widely state by state, and many states’ versions include limitations, including less protection for some artists after their deaths.
The shortcomings of this state-by-state system were highlighted when an anonymous internet user called Ghostwriter posted a song — apparently using AI to mimic the voices of Drake and The Weeknd — called “Heart on My Sleeve.” The track’s uncanny rendering of the two major stars immediately went viral, forcing the music business to confront the new, fast-developing concern of AI voice impersonation.
A month later, sources told Billboard that the three major label groups — UMG, Warner Music Group and Sony Music — had been in talks with the big music streaming services to allow them to cite “right of publicity” violations as a reason to take down songs with AI vocals. Removing songs on that basis is not required by law, so any cooperation from the streamers would be voluntary.
“Deep fakes, and/or unauthorized recordings or visuals of artists generated by AI, can lead to consumer confusion, unfair competition against the artists that actually were the original creator, market dilution and damage to the artists’ reputation or potentially irreparably harming their career. An artist’s voice is often the most valuable part of their livelihood and public persona. And to steal it, no matter the means, is wrong,” said Harleston.
In his written testimony, Harleston went deeper, stating UMG’s position that “AI generated, mimicked vocals trained on vocal recordings from our copyrighted recordings go beyond Right of Publicity violations… copyright law has clearly been violated.” Many AI voice uses circulating on the internet involve users mashing up one previously released song topped with a different artist’s voice. These types of uses, Harleston wrote, mean “there are likely multiple infringements occurring.”
Harleston added that “visibility into AI training data is also needed. If the data on AI training is not transparent, the potential for a healthy marketplace will be stymied as information on infringing content will be largely inaccessible to individual creators.”
Another witness at the hearing raised the idea of an “opt-out” system so that artists who do not wish to be part of an AI’s training data set will have the option of removing themselves. Already, Spawning, a music-tech start-up, has launched a website to put this possible remedy into practice for visual art. Called “HaveIBeenTrained.com,” the service helps creators opt out of training data sets commonly used by an array of AI companies, including Stability AI, which previously agreed to honor the HaveIBeenTrained.com opt-outs.
Harleston, however, said he did not believe opt-outs are enough. “It will be hard to opt out if you don’t know what’s been opted in,” he said. Spawning co-founder Mat Dryhurst previously told Billboard that HaveIBeenTrained.com is working on an opt-in tool, though this product has yet to be released.
Finally, Harleston urged Congress to require that AI-generated content be labeled as such. “Consumers deserve to know exactly what they’re getting,” he said.
LONDON — Amid increasing concern among artists, songwriters, record labels and publishers over the impact of artificial intelligence (AI) on the music industry, European regulators are finalizing sweeping new laws that will help determine what AI companies can and cannot do with copyrighted music works.
On Wednesday (June 14), Members of the European Parliament (MEPs) voted overwhelmingly in favor of the Artificial Intelligence (AI) Act with 499 votes for, 28 against and 93 abstentions. The draft legislation, which was first proposed in April 2021 and covers a wide range of AI applications, including its use in the music industry, will now go before the European Parliament, European Commission and the European Council for review and possible amendments ahead of its planned adoption by the end of the year.
For music rightsholders, the European Union’s (EU) AI Act is the world’s first legal framework for regulating AI technology in the record business and comes as other countries, including the United States, China and the United Kingdom, explore their own paths to policing the rapidly evolving AI sector.
The EU proposals state that generative AI systems will be required to disclose when content they produce is AI-generated — helping distinguish deep-fake content from the real thing — and to provide detailed, publicly available summaries of any copyright-protected music or data they have used for training purposes.
“The AI Act will set the tone worldwide in the development and governance of artificial intelligence,” MEP and co-rapporteur Dragos Tudorache said following Wednesday’s vote. The EU legislation would ensure that AI technology “evolves and is used in accordance with the European values of democracy, fundamental rights, and the rule of law,” he added.
The EU’s AI Act arrives as the music business is urgently trying to respond to recent advances in the technology. The issue came to a head in April with the release of “Heart on My Sleeve,” the now-infamous song uploaded to TikTok that is said to have been created using AI to imitate vocals from Drake and The Weeknd. The song was quickly pulled from streaming services following a request from Universal Music Group, which represents both artists, but not before it had racked up hundreds of thousands of streams.
A few days before “Heart on My Sleeve” became a short-lived viral hit, UMG wrote to streaming services, including Spotify and Apple Music, asking them to stop AI companies from accessing the label’s copyrighted songs “without obtaining the required consents” to “train” their machines. The Recording Industry Association of America (RIAA) has also warned against AI companies violating copyrights by using existing music to generate new tunes.
If the EU’s AI Act passes in its present draft form, it will strengthen supplementary protections against the unlawful use of music in training AI systems. Existing European laws dealing with text and data-mining copyright exceptions mean that rightsholders will still technically need to opt out of those exceptions if they want to ensure their music is not used by AI companies that are either operating or accessible in the European Union.
The AI Act would not undo or change any of the copyright protections currently provided under EU law, including the Copyright Directive, which came into force in 2019 and effectively ended safe harbor provisions for digital platforms in Europe.
That means that if an AI company were to use copyright-protected songs for training purposes — and publicly declare the material it had used as required by the AI Act — it would still be subject to infringement claims for any AI-generated content it then tried to commercially release, including infringement of the copyright, legal, personality and data rights of artists and rightsholders.
“What cannot, is not, and will not be tolerated anywhere is infringement of songwriters’ and composers’ rights,” said John Phelan, director general of international music publishing trade association ICMP, in a statement. The AI Act, he says, will ensure “special attention for intellectual property rights” but further improvements to the legislation “are there to be won.”
ASCAP has announced a number of new initiatives designed to help its members protect their copyrights and plan for the future of artificial intelligence.
The events start later this month, on June 21, with The ASCAP Experience, the performing rights organization’s annual event. As part of its full day of programming, ASCAP will include the panel Intelligently Navigating Artificial Intelligence, created to act as a state of the union for music AI, covering both how the technology could change music creation and consumption and how ASCAP is working to address these changes. Panelists include Lucas Cantor (composer, ASCAP member), Rachel Lyske (CEO of DAACI) and Nicholas Lehman (chief strategy and digital officer, ASCAP).
On July 19, ASCAP will host a half-day AI Symposium in New York City to dive even deeper into the technology’s challenges and opportunities. Details of the event’s speakers will be announced soon.
In addition to the summer’s educational initiatives, the PRO is continuing the ASCAP Lab Challenge, a fund started in 2019 to power innovation in music. In the past, the ASCAP Lab has run an accelerator program in partnership with the NYC Media Lab, led by the NYU Tandon School of Engineering, exploring new technologies such as the metaverse, augmented reality, spatial audio and computer vision and helping to incubate more than a dozen startups and university research projects.
This year, ASCAP Lab Challenge is focusing on AI, funding five startups that are looking to change the music industry in a positive way. As part of its investment, ASCAP will work alongside these recipients for 12 weeks to guide the development of their products and ensure they can benefit music creators.
Included in the 2023 Lab Challenge class is DAACI, a generative AI model that composes and arranges scores, particularly for video games and virtual reality; Infinite Album, a generative AI company that creates copyright-safe video game music that can react to game play in real time; Overture Games, a company which creates video games designed to encourage musicians to practice and avoid burnout using AI-powered pitch detection and visual feedback, gamifying the music education experience; Samplifi, which isolates auditory information for the benefit of hearing-impaired musicians; and Sounds.Studio, created by Never Before Heard Sounds, a browser-based music production platform that uses AI to help musicians create faster with tools like stem splitting, vocal conversion, timbre transfer and more.
Sony Music Group chairman Rob Stringer said on Tuesday (May 23) that the company is focused on the fight against low-quality content — which he called “the lowest common denominator” — flooding top streaming platforms. “We have to look after the premium quality artists at the top of our business,” Stringer said during a company-wide […]
What if we had the power to bring back the dead? As far as recordings are concerned, we might be getting pretty close.
The viral success of the so-called “Fake Drake” track “Heart on My Sleeve,” which apparently employed AI technology to create realistic renderings of vocals from Drake and The Weeknd without their knowledge, has raised the possibility that perhaps any voice can now be imitated by AI, even artists who died decades ago.
Last week, producer Timbaland did just that. “I always wanted to work with BIG, and I never got a chance to. Until today…” he said in an Instagram Reel, pressing play on an unreleased song clip that sounds like Notorious BIG rapping on top of a Timbaland beat, despite the fact that the rapper was murdered in a drive-by shooting 25 years prior. (A representative for Timbaland did not respond to Billboard’s request for comment. A representative for Notorious BIG’s estate declined to comment.)
But this is not the first time a deceased star’s voice has been resurrected with AI. The HYBE-owned AI voice synthesis company Supertone recreated the voice of late South Korean folk artist Kim Kwang-seok last year, and in November, Tencent’s Lingyin Engine made headlines for developing “synthetic voices in memory of legendary artists,” like Teresa Teng and Anita Mui. To see even more examples of this technology applied to late American singers, spend a few minutes on TikTok searching phrases like “Kurt Cobain AI cover” or “XXXTentacion AI voice.”
Some artists — like Grimes and Holly Herndon — have embraced the idea of this vocal recreation technology, finding innovative ways to grant fans access to their voices while maintaining some control through their own AI models. Other artists are showing signs that they will resist it, fearing that the technology could lead to confusion over which songs they actually recorded. There is also fear that fans will put words into artists’ mouths, making them voice phrases and opinions that they would never say IRL. Even Grimes admitted on Twitter that there is the possibility people will use her voice to say “rly rly toxic lyrics” or “nazi stuff” — and said she’d take those songs down.
In the case of artists like Notorious BIG or Kurt Cobain, who both passed away when the internet was still something you had to dial up, it’s impossible to know where they might stand on this next-gen technology. Still, their voices are being resurrected through AI, and it seems these vocals are getting more realistic by the day.
It calls to mind the uncanny valley nature of the Tupac hologram that debuted at Coachella in 2012, or even the proliferation of posthumous albums in more recent years, which are especially common from artists who passed away suddenly at a young age, like Juice WRLD, Lil Peep and Mac Miller.
Tyler, the Creator has voiced what many others have felt about the posthumous album trend. At an April 26 concert in Los Angeles, he noted that he’s written it into his will that he does not want any unreleased music put out after his death. “That’s f-cking gross,” he said. “Like, half-ass ideas and some random feature on it…like no.” It remains unclear if Tyler’s dying wishes would be honored when that time comes, however. Labels often own every song recorded during the term of their contract with an artist, so there is financial incentive for labels to release those unheard records.
Some who look at this optimistically liken the ability to render an artist’s voice onto a cover or original track to an emerging, novel form of fan engagement, similar to remixing, sampling or even writing fan fiction. Much like where this new technology seems to be headed, remixes and samples both started as unsanctioned creations. Those reworkings were often less about making songs that would go toe-to-toe with the original artist’s catalog on the Billboard charts than about creativity and playfulness. Of course, plenty of legal issues came along with the emergence of both remixing and sampling.
The legality of bringing artists’ voices back from the grave, specifically, is also still somewhat unclear. A celebrity’s voice may be covered by “right of publicity” laws, which can protect them from having their voices commercially exploited without authorization. However, post-mortem publicity rights can be limited. “There’s no federal rights of publicity statute, just a hodgepodge of different state laws,” says Josh Love, partner at Reed Smith. He explains that depending on where the artist was domiciled at the time of their death, their estate may not possess any rights of publicity, but in states like California, there can be strong protection after death.
Another potential safeguard is the Lanham Act — which prohibits the use of any symbol or device that is likely to deceive consumers about the association, sponsorship or approval of goods and services — though it may be a less potent argument post-mortem. But most cases in which rights of publicity or the Lanham Act were used to protect a musician’s voice — like Tom Waits v. Frito Lay and Bette Midler v. Ford — were clear examples of voices being appropriated for commercial use. Creative works, like songs, are much more likely to be deemed a protected form of free speech.
Some believe this could be a particularly interesting new path for the estates and rights holders who control artists’ likenesses to revive older catalogs, especially when the artist is not alive to take part in any more promotion. Zach Katz, president and COO of FaZe Clan and former president of BMG US, put it this way in a recent press release for voice mapping service Covers.ai: “AI will open a new, great opportunity for more legacy artists and catalogs to have their ‘Kate Bush’ or ‘Fleetwood Mac’ moment. We are living in a remix culture and the whole fan-music movement is overdue to arrive in the industry.”
Though Covers.ai, created by start-up MAYK, was only just released to the public today, May 10, the company says it had already amassed over 100,000 sign-ups leading up to its launch, suggesting a strong appetite for this technology. With Covers.ai, users can upload original songs and map someone else’s voice on top of them, and the company says it is working to partner with the music business to license and pay for these voices. Its co-founder and CEO, Stefan Heinrich, says the idea is especially popular so far with Gen Z and Gen Alpha: “The product we’re building here is really made for the next generation, the one coming up.”
Between Supertone, Lingyin Engine, Covers.ai and other competitors like Uberduck coming into the marketplace, it seems the popularization of these AI voice synthesizers is inevitable (albeit legally uncertain). But playing with dead artists’ voices adds another layer of moral complexity to the discussion: Is this more akin to paying respects or grave robbing?
MAYK’s artificial intelligence-powered voice recreation tool officially launched to all users today (May 10).
Covers.ai lets users upload their own original songs and then try other AI voices on top of them, including the voices of Billboard Hot 100-charting talent. According to a company press release, Covers.ai’s tool topped 100,000 sign-ups prior to its launch.
Its co-founder and CEO, Stefan Heinrich — an entrepreneur who previously worked in high-ranking positions at Cameo, TikTok, Musical.ly and YouTube — explains that, for now, most of the models available for users to work with are “community models.”
“This is open source,” he explains. “There are users that make these models with various celebrity voices out in the wild, and those can be uploaded and marked as ‘community models’ on our site. At the same time, we are working with artist teams to license the voices of specific talent so we can find a way to compensate them for their official use.”
Eventually, Heinrich says he also hopes to find a way to license song catalogs from rights holders so that users can mix and match tracks with various artists’ voices they find on the site. Through these licensing agreements, he hopes to find a way to create a new revenue stream for talent, but to date, these licenses have not yet been finalized.
MAYK is backed by notable music investors including Zach Katz (president/COO of FaZe Clan, former president of BMG US), Matt Pincus (co-founder and CEO of MUSIC), Jon Vlassopulos (CEO of Napster, former global head of music at Roblox), Mohnish Sani (principal, content acquisition, Amazon Music) and more.
The launch arrives as conversations around AI and vocal deepfakes are at a fever pitch. Just last month, an unknown songwriter called Ghostwriter went viral for creating a song called “Heart on My Sleeve” using supposed AI renderings of Drake and The Weeknd’s voices without their knowledge. Soon after, Grimes responded to the news by launching her own AI voice model to let users freely use her voice to create music.
In just a few minutes of searching, it’s apparent that TikTok is already flooded with songs featuring AI vocals, whether they are original songs employing the voices of famous talent, like “Heart on My Sleeve,” or mash-ups of one well-known song with the voice of a different artist.
This AI vocal technology raises legal questions, however.
Mimicking vocals may be a violation of an artist’s so-called right of publicity — the legal right to control how one’s individual identity is commercially exploited by others. Past landmark cases — like Tom Waits v. Frito Lay and Bette Midler v. Ford Motor Company — have established that soundalikes of famous voices cannot be employed without consent to sell products, but the precedent is less clear when it comes to creative expressions like songs, which are much more likely to be deemed a protected form of free speech.
Heinrich hopes that Covers.ai can help “democratize creativity” and make it far more “playful” in an effort to move music fans from lean-back forms of music discovery, like listening to the radio or a pre-programmed editorial playlist, to a more engaged, interactive experience. “I think what music is, is really changing right now,” he says, noting that Covers.ai’s earliest adopters are mostly Gen Z and Gen Alpha. “The product we’re building here is really made for the next generation, the one coming up.”
As the music industry grapples with the far-reaching implications of artificial intelligence, Warner Music Group CEO Robert Kyncl is being mindful of the opportunities it will create. “Framing it only as a threat is inaccurate,” he said on Tuesday (May 9) during the earnings call for the company’s second fiscal quarter ended March 31.
Kyncl’s tenure as chief business officer at YouTube informs his viewpoint on AI’s potential to contribute to the music industry’s growth. “When I arrived [at YouTube] in 2010, we were fighting many lawsuits around the world and were generating low tens of millions of dollars from [user-generated content],” he continued. “We turned that liability into a billion-dollar opportunity in a handful of years and multibillion-dollar revenue stream over time. In 2022, YouTube announced that it paid out over $2 billion from UGC to music rightsholders alone and far more across all content industries.”
Not that AI doesn’t pose challenges for owners of intellectual property. A wave of high-profile AI-generated songs — such as the “fake Drake”/The Weeknd track, “Heart on My Sleeve,” by an anonymous producer under the name Ghostwriter — has revealed how off-the-shelf generative AI technologies can easily replicate the sound and style of popular artists without their consent.
“Our first priority is to vigorously enforce our copyrights and our rights in name, image, likeness, and voice, to defend the originality of our artists and songwriters,” said Kyncl, echoing comments by Universal Music Group CEO Lucian Grainge in a letter sent to Spotify and other music streaming platforms in March. In that letter, Grainge said UMG “would not hesitate to take steps to protect our rights and those of our artists” against AI companies that use its intellectual property to “train” their AI.
“It is crucial that any AI generative platform discloses what their AI is trained on and this must happen all around the world,” Kyncl said on Tuesday. He pointed to the EU Artificial Intelligence Act — a proposed law that would establish government oversight and transparency requirements for AI systems — and efforts by U.S. Sen. Chuck Schumer in April to build “a flexible and resilient AI policy framework” to impose guardrails while allowing for innovation.
“I can promise you that whenever and wherever there is a legislative initiative on AI, we will be there in force to ensure that protection of intellectual property is high on the agenda,” Kyncl continued.
Kyncl went on to note that technological problems also require technological solutions. AI companies and distribution platforms can manage the proliferation of AI music by building new technologies for “identifying and tracking of content on consumption platforms that can appropriately identify copyright and remunerate copyright holders,” he continued.
Again, Kyncl’s employment at YouTube comes into play here. Prior to his arrival, the platform built a proprietary digital fingerprinting system, Content ID, to manage and monetize copyrighted material. In fact, one of Kyncl’s first hires as CEO of WMG, president of technology Ariel Bardin, is a former YouTube vp of product management who oversaw Content ID.
Labels are also attempting to rein in AI content by adopting “user-centric” royalty payment models that reward authentic, human-created recordings over mass-produced imitations. During UMG’s first quarter earnings call on April 26, Grainge said that “with the right incentive structures in place, platforms can focus on rewarding and enhancing the artist-fan relationship and, at the same time, elevate the user experience on their platforms, by reducing the sea of noise … eliminating unauthorized, unwanted and infringing content entirely.” WMG adopted user-centric (i.e. “fan-powered”) royalties on SoundCloud in 2022.
Sounding alarms about artificial intelligence has become a popular pastime in the ChatGPT era, taken up by high-profile figures as varied as industrialist Elon Musk, leftist intellectual Noam Chomsky and the 99-year-old retired statesman Henry Kissinger.
But it’s the concerns of insiders in the AI research community that are attracting particular attention. Geoffrey Hinton, a pioneering researcher and the so-called “Godfather of AI,” quit his role at Google so he could speak more freely about the dangers of the technology he helped create.
Over his decades-long career, Hinton did groundbreaking work on deep learning and neural networks that laid the foundation for much of the AI technology we see today.
There has been a spasm of AI introductions in recent months. San Francisco-based startup OpenAI, the Microsoft-backed company behind ChatGPT, rolled out its latest artificial intelligence model, GPT-4, in March. Other tech giants have invested in competing tools — including Google’s “Bard.”
Some of the dangers of AI chatbots are “quite scary,” Hinton told the BBC. “Right now, they’re not more intelligent than us, as far as I can tell. But I think they soon may be.”
In an interview with MIT Technology Review, Hinton also pointed to “bad actors” that may use AI in ways that could have detrimental impacts on society — such as manipulating elections or instigating violence.
Hinton, 75, says he retired from Google so that he could speak openly about the potential risks as someone who no longer works for the tech giant.
“I want to talk about AI safety issues without having to worry about how it interacts with Google’s business,” he told MIT Technology Review. “As long as I’m paid by Google, I can’t do that.”
Since announcing his departure, Hinton has maintained that Google has “acted very responsibly” regarding AI. He told MIT Technology Review that there’s also “a lot of good things about Google” that he would want to talk about — but those comments would be “much more credible if I’m not at Google anymore.”
Google confirmed that Hinton had retired from his role after 10 years overseeing the Google Research team in Toronto.
Hinton declined further comment Tuesday but said he would talk more about it at a conference Wednesday.
At the heart of the debate on the state of AI is whether the primary dangers are in the future or present. On one side are hypothetical scenarios of existential risk caused by computers that supersede human intelligence. On the other are concerns about automated technology that’s already getting widely deployed by businesses and governments and can cause real-world harms.
“For good or for not, what the chatbot moment has done is made AI a national conversation and an international conversation that doesn’t only include AI experts and developers,” said Alondra Nelson, who until February led the White House Office of Science and Technology Policy and its push to craft guidelines around the responsible use of AI tools.
“AI is no longer abstract, and we have this kind of opening, I think, to have a new conversation about what we want a democratic future and a non-exploitative future with technology to look like,” Nelson said in an interview last month.
A number of AI researchers have long expressed concerns about racial, gender and other forms of bias in AI systems, including text-based large language models that are trained on huge troves of human writing and can amplify discrimination that exists in society.
“We need to take a step back and really think about whose needs are being put front and center in the discussion about risks,” said Sarah Myers West, managing director of the nonprofit AI Now Institute. “The harms that are being enacted by AI systems today are really not evenly distributed. It’s very much exacerbating existing patterns of inequality.”
Hinton was one of three AI pioneers who in 2019 won the Turing Award, an honor that has become known as the tech industry’s version of the Nobel Prize. The other two winners, Yoshua Bengio and Yann LeCun, have also expressed concerns about the future of AI.
Bengio, a professor at the University of Montreal, signed a petition in late March calling for tech companies to agree to a 6-month pause on developing powerful AI systems, while LeCun, a top AI scientist at Facebook parent Meta, has taken a more optimistic approach.