artificial intelligence
Most conversations around AI in music focus on music creation, protecting artists and rightsholders, and differentiating human-made music from machine-made works. But there is still more to discuss, because AI has hidden superpowers waiting to be explored. One use of the technology with immense potential to positively impact artists is music marketing.
As generative and complementary AI becomes a larger part of creative works in music, marketing will play a larger role than ever before. Music marketing isn’t just about reaching new and existing fans and promoting upcoming singles. Today, music marketing must establish an artist’s ownership of their work and ensure that the human creatives involved are known, recognized, and appreciated. We’re about to see the golden age of automation for artists who want to make these connections and gain this appreciation.
While marketing is a prerequisite to a creator’s success, it demands a lot of time, energy, and resources. According to Linktree’s 2023 Creator Report, 48% of creators who make $100-500k per year spend more than 10 hours on content creation every week. On top of that, three out of four creators want to diversify what they create but feel pressure to keep making what the algorithm rewards. Rather than fighting the impossible battle of constantly evolving and cranking out more content to match whatever the algorithm is boosting this week, creatives can have a much greater impact by focusing on their brand and making high-quality content for their audience.
For indie artists without support from labels and dedicated promotion teams, the constant pressure to push their new single on TikTok, post on Instagram, and engage with fans while finding the time to make new music is overwhelming. The pressure is only building, thanks to changes in streaming payouts. Indie artists need to reach escape velocity faster.
AI-powered music marketing can lighten that lift by generating campaign templates and delivering to artists the data they need to reach their intended audience. AI can take the data that artists and creators generate and put it to work in a meaningful way, automatically extracting insights from that information to build marketing campaigns and map out tactics that get results.
AI-driven campaigns can give creators back the time they need to do what they do best: create. While artificial intelligence saves artists time and generates actionable solutions for music promotion, it is still highly dependent on the artist’s input and human touch. Just as a flight captain has to set route information and parameters before switching on autopilot, an artist enters their content, ideas, intended audience, and hopeful outcome of the marketing campaign. Then, using this information, the AI-powered marketing platform can provide all of the data and suggestions necessary to produce the targeted results.
Rather than taking over the creative process, AI should be used to assist and empower artists to be more creative. It can help put the joy back into what can be a truly fun process — finding, reaching, and engaging with fans.
A large portion of artists who have tapped into AI marketing have never spent money on marketing before, but with the help of these emerging tools, planning and executing effective campaigns is more approachable and intuitive. As the music industry learns more about artificial intelligence and debates its ethical implications in music creation, equal thought must be given to the opportunities that it unlocks for artists to grow their fanbases, fuel more sustainable careers, and promote their human-made work.
Megh Vakharia is the co-founder and CEO of SymphonyOS, the AI-powered marketing platform empowering creatives to build successful marketing campaigns that generate fan growth using its suite of smart, automated marketing tools.
Earlier this month, 760 stations owned by iHeartMedia simultaneously threw their weight behind a new single: The Beatles’ “Now and Then.” This was surprising, because the group broke up in 1970 and two of the members are dead. “Now and Then” began decades ago as a home recording by John Lennon; more recently, AI-powered audio technology allowed for the separation of the demo’s audio components — isolating the voice and the piano — which in turn enabled the living Beatles to construct a whole track around them and roll it out to great fanfare.
“For three days, if you were a follower of popular culture, all you heard about was The Beatles,” says Arron Saxe, who represents several estates, including Otis Redding’s and Bill Withers’s. “And that’s great for the business of the estate of John Lennon and the estate of George Harrison and the current status of the two living legends.”
For many people, 2023 has been the year that artificial intelligence technology left the realm of science fiction and crashed rudely into daily life. And while AI-powered tools have the potential to impact wide swathes of the music industry, they are especially intriguing for those who manage estates or the catalogs of dead artists.
That’s because there are inherent constraints involved with this work: No one is around to make new stuff. But as AI models get better, they have the capacity to knit old materials together into something that can credibly pass as new — a reproduction of a star’s voice, for example. “As AI develops, it may impact the value of an estate, depending on what assets are already in the estate and can be worked with,” says Natalia Nataskin, chief content officer for Primary Wave, who estimates that she and her team probably spend around 25% of their time per week mulling AI (time she says they used to spend contemplating possibilities for NFTs).
And a crucial part of an estate manager’s job, Saxe notes, is “looking for opportunities to earn revenue.” “Especially with my clients who aren’t here,” he adds, “you’re trying to figure out, how do you keep it going forward?”
The answer, according to half a dozen executives who work with estates or catalogs of dead artists or songwriters, is “very carefully.” “We say no to 99 percent of opportunities,” Saxe says.
“You have this legacy that is very valuable, and once you start screwing with it, you open yourself up to causing some real damage,” adds Jeff Jampol, who handles the estates of The Doors, Janis Joplin and more. “Every time you’re going to do something, you have to be really protective. It’s hard to be on the bleeding edge.”
To work through these complicated issues, WME went so far as to establish an AI Task Force where agents from every division educate themselves on different platforms and tools to “get a sense for what is out there and where there are advantages to bring to our clients,” says Chris Jacquemin, the company’s head of digital strategy. The task force also works with WME’s legal department to gain “some clarity around the types of protections we need to be thinking about,” he continues, as well as with the agency’s legislative division in Washington, D.C.
At the moment, Jampol sees two potentially intriguing uses of AI in his work. “It would be very interesting to have, for instance, Jim Morrison narrate his own documentary,” he explains. He could also imagine using an AI voice model to read Morrison’s unrecorded poetry. (The Doors singer did record some poems during his lifetime, suggesting he was comfortable with this activity.)
On Nov. 15, Warner Music Group announced a potentially similar initiative, partnering with the French great Edith Piaf’s estate to create a voice model — based on the singer’s old interviews — which will narrate the animated film Edith. The executors of Piaf’s estate, Catherine Glavas and Christie Laume, said in a statement that “it’s been a special and touching experience to be able to hear Edith’s voice once again — the technology has made it feel like we were back in the room with her.”
The use of AI tech to recreate a star’s speaking voice is “easier” than attempting to put together an AI model that will replicate a star singing, according to Nataskin. “We can train a model on only the assets that we own — on the speaking voice from film clips, for example,” she explains.
In contrast, to train an AI model to sing like a star of old, the model needs to ingest a number of the artist’s recordings. That requires the consent of other rights holders — the owners of those recordings, which may or may not be the estate, as well as anyone involved in their composition. Many who spoke to Billboard for this story said they were leery of AI making new songs in the name of bygone legends. “To take a new creation and say that it came from someone who isn’t around to approve it, that seems to me like quite a stretch,” says Mary Megan Peer, CEO of the publisher peermusic.
Outside the United States, however, the appetite for this kind of experimentation may differ. Roughly a year ago, the Chinese company Tencent Music Entertainment told analysts that it used AI-powered technology to create new vocal tracks from dead singers, one of which went on to earn more than 100 million streams.
For now, at least, Nataskin characterized Primary Wave as focused on “enhancing” with AI tech, “rather than creating something from scratch.” And after Paul McCartney initially mentioned that artificial intelligence played a role in “Now and Then,” he quickly clarified on X that “nothing has been artificially or synthetically created,” suggesting there is still some stigma around the use of AI to generate new vocals from dead icons. The tech just “cleaned up some existing recordings,” McCartney noted.
This kind of AI use for “enhancing” and “cleaning up,” tweaking and adjusting has already been happening regularly for several years. “For all of the industry freakout about AI, there’s actually all these ways that it’s already operating everyday on behalf of artists or labels that isn’t controversial,” says Jessica Powell, co-founder and CEO of Audioshake, a company that uses AI-powered technology for stem separation. “It can be pretty transformational to be able to open up back catalog for new uses.”
The publishing company peermusic used AI-powered stem separation to create instrumentals for two tracks in its catalog — Gaby Moreno’s “Fronteras” and Rafael Solano’s “Por Amor” — which could then be placed in ads for Oreo and Don Julio, respectively. Much as the Beatles’ team did, Łukasz Wojciechowski, co-founder of Astigmatic Records, used stem separation to isolate, and then remove distortion from, the trumpet part in a previously unreleased recording he found of jazz musician Tomasz Stanko. After the cleanup, the music could be released for the first time. “I’m seeing a lot of instances with older music where the quality is really poor, and you can restore it,” Wojciechowski says.
Powell acknowledges that these uses are “not a wild proposition like, ‘create a new voice for artist X!’” Those have been few and far between — at least the authorized ones. (Hip-hop fans have been using AI-powered technology to turn snippets of rap leaks from artists like Juice WRLD, who died in 2019, into “finished” songs.) For now, Saxe believes “there hasn’t been that thing where people can look at it and go, ‘They nailed that use of it.’ We haven’t had that breakout commercial popular culture moment.”
It’s still early, though. “Where we go with things like Peter Tosh or Waylon Jennings or Eartha Kitt, we haven’t decided yet,” says Phil Sandhaus, head of WME’s Legends division. “Do we want to use voice cloning technologies out there to create new works and have Eartha Kitt in her unique voice sing a brand new song she’s never sung before? Who knows? Every family, every estate is different.”
Additional reporting by Melinda Newman
Listeners remain wary of artificial intelligence, according to Engaging with Music 2023, a forthcoming report from the International Federation of the Phonographic Industry (IFPI) that seems aimed in particular at government regulators.
The IFPI surveyed 43,000 people across 26 countries, coming to the conclusion that 76% of respondents “feel that an artist’s music or vocals should not be used or ingested by AI without permission,” and 74% believe “AI should not be used to clone or impersonate artists without authorisation.”
The results are not surprising. Most listeners probably weren’t thinking much, if at all, about AI and its potential impacts on music before 2023. (Some still aren’t thinking about it: 89% of those surveyed said they were “aware of AI,” leaving 11% who have somehow managed to avoid a massive amount of press coverage this year.) New technologies are often treated with caution outside the tech industry.
It’s also easy for survey respondents to support statements about getting authorization for something before doing it — that generally seems like the right thing to do. But historically, artists haven’t always been interested in preemptively obtaining permission.
Take the act of sampling another song to create a new composition. Many listeners would presumably agree that artists should go through the process of clearing a sample before using it. In reality, however, many artists sample first and clear later, sometimes only if they are forced to.
In a statement, Frances Moore, IFPI’s CEO, said that the organization’s survey serves as a “timely reminder for policymakers as they consider how to implement standards for responsible and safe AI.”
U.S. policymakers have been moving slowly to develop potential guidelines around AI. In October, a bipartisan group of senators released a draft of the NO FAKES Act, which aims to prevent the creation of “digital replicas” of an artist’s image, voice, or visual likeness without permission.
“Generative AI has opened doors to exciting new artistic possibilities, but it also presents unique challenges that make it easier than ever to use someone’s voice, image, or likeness without their consent,” Senator Chris Coons said in a statement. “Creators around the nation are calling on Congress to lay out clear policies regulating the use and impact of generative AI.”
When Dierks Bentley’s band is looking for something to keep it occupied during long bus rides, the group has, at times, turned to artificial intelligence apps, asking them to create album reviews or cover art for the group’s alter ego, The Hot Country Knights.
“So far,” guitarist Charlie Worsham says, “AI does not completely understand The Hot Country Knights.”
By the same token, Music Row doesn’t completely understand AI. But the developing technology is here, inspiring tech heads and early adopters to experiment with it, using it to get a feel, for example, for how Bentley’s voice might fit a new song or to kick-start a verse that has the writer stumped. But it has also inspired a palpable amount of fear among artists anticipating their voices will be misused and among musicians who feel they’ll be completely replaced.
“As a songwriter, I see the benefit that you don’t have to shell out a ton of money for a demo singer,” one attendee said during the Q&A section of an ASCAP panel about AI on Nov. 7. “But also, as a demo singer, I’m like, ‘Oh, shit, I’m out of a job.’”
That particular panel, moderated by songwriter-producer Chris DeStefano (“At the End of a Bar,” “That’s My Kind of Night”), was one of three AI presentations that ASCAP hosted at Nashville’s Twelve Thirty Club the morning after the ASCAP Country Music Awards, hoping to educate Music City about the burgeoning technology. The event addressed the creative possibilities ahead, the evolving legal discussion around AI and the ethical questions that it raises. (ASCAP has endorsed six principles for AI frameworks.)
The best-known examples of AI’s entry into music have revolved around the use of public figures’ voices in novel ways. Hip-hop artist Drake, in one prominent instance, had his voice re-created in a cover of “Bubbly,” originated by Colbie Caillat, who released her first country album, Along the Way, on Sept. 22.
“Definitely bizarre,” Caillat said during CMA Week activities. “I don’t think it’s good. I think it makes it all too easy.”
But ASCAP panelists outlined numerous ways AI can be employed for positive uses without misappropriating someone’s voice. DeStefano uses AI-assisted iZotope software, which has learned his mixing tendencies, to elevate his tracks to “another level.” Independent hip-hop artist Curtiss King has used AI to handle tasks outside of his wheelhouse that he can’t afford to outsource, such as graphic design or developing video ideas for social media. Singer-songwriter Anna Vaus instructed AI to create a 30-day social media campaign for her song “Halloween on Christmas Eve” and has used it to adjust her bio or press releases — “stuff,” she says, “that is not what sets my soul on fire.” It allows her more time, she said, for “sitting in my room and sharing my human experiences.”
All of this forward motion is happening faster in some other genres than it is in country, and the abuses — the unauthorized use of Drake’s voice or Tom Cruise’s image — have entertainment lawyers and the Copyright Office playing catch-up. Those examples test the application of the fair use doctrine in copyright law, which allows creators to play with existing copyrights. But as Sheppard Mullin partner Dan Schnapp pointed out during the ASCAP legal panel, fair use requires the new piece to be a transformative product that does not damage the market for the original work. When Drake’s voice is being applied without his consent to a song he has never recorded and he is not receiving a royalty, that arguably affects his marketability.
The Copyright Office has declined to offer copyright protection for AI creations, though works that are formed through a combination of human and artificial efforts complicate the rule. U.S. Copyright Office deputy general counsel Emily Chapuis pointed to a comic book composed by a human author who engaged AI for the drawings. Copyright was granted to the text, but not the illustrations.
The legal community is also sorting through rights to privacy and so-called “moral rights,” the originator’s ability to control how a copyright is used.
“You can’t wait for the law to catch up to the tech,” Schnapp said during the legal panel. “It never has and never will. And now, this is the most disruptive technology that’s hit the creative industry, generally, in our lifetime. And it’s growing exponentially.”
Which has some creators uneasy. Carolyn Dawn Johnson asked from the audience if composers should stop using their phones during writing appointments because ads can track typed and spoken activity, thus opening the possibility that AI begins to draw on content that has never been included in copyrighted material. The question was not fully answered.
But elsewhere, Nashville musicians are beginning to use AI in multiple ways. Restless Road has had AI apply harmonies to songwriter demos to see if a song might fit its sound. Elvie Shane, toying with a chatbot, developed an idea that he turned into a song about the meth epidemic, “Appalachian Alchemy.” Chase Matthew’s producer put a version of his voice on a song to convince him to record it. Better Than Ezra’s Kevin Griffin, who co-wrote Sugarland’s “Stuck Like Glue,” has asked AI to suggest second verses on songs he was writing — the verses are usually pedestrian, but he has found “one nugget” that helped finish a piece.
The skeptics have legitimate points, but skeptics also protested electronic instruments, drum machines, CDs, file sharing and programmed tracks. The industry has inevitably adapted to those technologies. And while AI is scary, early adopters seem to think it’s making them more productive and more creative.
“It’s always one step behind,” noted King. “It can make predictions based upon the habits that I’ve had, but there’s so many interactions that I have because I’m a creative and I get creative about where I’m going next … If anything, AI has given me like a kick in the butt to be more creative than I’ve ever been before.”
Songwriter Kevin Kadish (“Whiskey Glasses,” “Soul”) put the negatives of AI into a bigger-picture perspective.
“I’m more worried about it for like people’s safety and all the scams that happen on the phone,” he said on the ASCAP red carpet. “Music is the least of our worries with AI.”
The ousted leader of ChatGPT-maker OpenAI is returning to the company that fired him late last week, culminating a days-long power struggle that shocked the tech industry and brought attention to the conflicts around how to safely build artificial intelligence.
San Francisco-based OpenAI said in a statement late Tuesday: “We have reached an agreement in principle for Sam Altman to return to OpenAI as CEO with a new initial board.”
The board, which replaces the one that fired Altman on Friday, will be led by former Salesforce co-CEO Bret Taylor, who also chaired Twitter’s board before its takeover by Elon Musk last year. The other members will be former U.S. Treasury Secretary Larry Summers and Quora CEO Adam D’Angelo.
OpenAI’s previous board of directors, which included D’Angelo, had refused to give specific reasons for why it fired Altman, leading to a weekend of internal conflict at the company and growing outside pressure from the startup’s investors.
The chaos also accentuated the differences between Altman — who’s become the face of generative AI’s rapid commercialization since ChatGPT’s arrival a year ago — and members of the company’s board who have expressed deep reservations about the safety risks posed by AI as it gets more advanced.
Microsoft, which has invested billions of dollars in OpenAI and has rights to its current technology, quickly moved to hire Altman on Monday, along with another co-founder and former president, Greg Brockman, who had quit in protest after Altman’s removal. That emboldened nearly all of the startup’s 770 employees, who signed a letter threatening an exodus and calling for the board’s resignation and Altman’s return.
One of the four board members who participated in Altman’s ouster, OpenAI co-founder and chief scientist Ilya Sutskever, later expressed regret and joined the call for the board’s resignation.
Microsoft in recent days had pledged to welcome all employees who wanted to follow Altman and Brockman to a new AI research unit at the software giant. Microsoft CEO Satya Nadella also made clear in a series of interviews Monday that he was still open to the possibility of Altman returning to OpenAI, so long as the startup’s governance problems are solved.
“We are encouraged by the changes to the OpenAI board,” Nadella posted on X late Tuesday. “We believe this is a first essential step on a path to more stable, well-informed, and effective governance.”
In his own post, Altman said that “with the new board and (with) Satya’s support, I’m looking forward to returning to OpenAI, and building on our strong partnership with (Microsoft).”
Co-founded by Altman as a nonprofit with a mission to safely build so-called artificial general intelligence that outperforms humans and benefits humanity, OpenAI later became a for-profit business but one still run by its nonprofit board of directors. It’s not clear yet if the board’s structure will change with its newly appointed members.
“We are collaborating to figure out the details,” OpenAI posted on X. “Thank you so much for your patience through this.”
Nadella said Brockman, who was OpenAI’s board chairman until Altman’s firing, will also have a key role to play in ensuring the group “continues to thrive and build on its mission.”
Hours earlier, Brockman returned to social media as if it were business as usual, touting a feature called ChatGPT Voice that was rolling out to users.
“Give it a try — totally changes the ChatGPT experience,” Brockman wrote, flagging a post from OpenAI’s main X account that featured a demonstration of the technology and playfully winking at recent turmoil.
“It’s been a long night for the team and we’re hungry. How many 16-inch pizzas should I order for 778 people?” the demonstrator asks, using the number of people who work at OpenAI. ChatGPT’s synthetic voice responds by recommending around 195 pizzas, ensuring everyone gets three slices.
As for OpenAI’s short-lived interim CEO Emmett Shear, the second interim CEO in the days since Altman’s ouster, he posted on X that he was “deeply pleased by this result, after ~72 very intense hours of work.”
“Coming into OpenAI, I wasn’t sure what the right path would be,” wrote Shear, the former head of Twitch. “This was the pathway that maximized safety alongside doing right by all stakeholders involved. I’m glad to have been a part of the solution.”
This is The Legal Beat, a weekly newsletter about music law from Billboard Pro, offering you a one-stop cheat sheet of big new cases, important rulings and all the fun stuff in between.
This week: Sean “Diddy” Combs is accused of rape amid an ongoing wave of music industry sexual abuse lawsuits; Shakira settles her $15 million tax evasion case on the eve of trial; UMG defeats a lawsuit filed by artists over its lucrative ownership stake in Spotify; and more.
THE BIG STORY: Diddy Sued As Music #MeToo Wave Continues
Following a string of abuse cases against powerful men in the music industry, Sean “Diddy” Combs was sued by R&B singer and longtime romantic partner Cassie over allegations of assault and rape — and then settled the case just a day later.
In a graphic complaint, attorneys for Cassie (full name Casandra Ventura) claimed she “endured over a decade of his violent behavior and disturbed demands,” including repeated physical beatings and forcing her to “engage in sex acts with male sex workers” while he masturbated. Near the end of their relationship, Ventura claimed that Combs “forced her into her home and raped her while she repeatedly said ‘no’ and tried to push him away.”
Combs immediately denied the allegations as “offensive and outrageous.” He claimed Cassie had spent months demanding $30 million to prevent her from writing a tell-all book, a request he had “unequivocally rejected as blatant blackmail.”
Just a day after it was filed, Combs and Ventura announced that they had reached a settlement to resolve the case. Though quick settlements can happen in any type of lawsuit, it’s pretty unusual to see a case with such extensive and explosive allegations end just 24 hours after it was filed in court. “I wish Cassie and her family all the best,” Combs said in a statement. “Love.”
Both sides quickly put their spin on the settlement. A former staffer at Cassie’s law firm sent out a statement arguing that the quick resolution was “practically unheard of” and suggesting it showed the “evidence against Mr. Combs was overwhelming.” Combs’ lawyer, Ben Brafman, put out his own statement reiterating that a settlement — “especially in 2023” — was “in no way an admission of wrongdoing.”
The case against Combs is the most explosive sign yet that, six years after the start of the #MeToo movement, the music industry is currently experiencing something of a second iteration.
Sexual assault lawsuits were filed earlier this month against both former Recording Academy president/CEO Neil Portnow and label exec Antonio “L.A.” Reid, and in October longtime publishing exec Kenny MacPherson was sued for sexual harassment. Before that, sexual misconduct allegations were leveled at late Atlantic Records co-founder Ahmet Ertegun; Backstreet Boys member Nick Carter; singer Jason Derulo; and ex-Kobalt exec Sam Taylor.
Many of the recent cases have been filed under New York’s Adult Survivors Act, a statute that created a limited window for alleged survivors to take legal action over years-old accusations that would typically be barred under the statute of limitations. With that look-back period set to end on Thursday (Nov. 23), more cases could be coming in the next few days. Stay tuned…
Other top stories this week…
UMG WINS CASE OVER SPOTIFY STAKE – A federal judge dismissed a class action against Universal Music Group that challenged the fairness of its 2008 purchase of shares in Spotify. The case, filed by ’90s hip-hop duo Black Sheep, accused the company of taking lower-than-market royalty rates in return for a chunk of equity that’s now worth hundreds of millions. But the judge ruled that such a maneuver — even if proven true — wouldn’t have violated UMG’s contract with its artists.
A$AP ROCKY TO STAND TRIAL – A Los Angeles judge ruled that there was enough evidence for A$AP Rocky to stand trial on felony charges that he fired a gun at a former friend and collaborator outside a Hollywood hotel in 2021. The 35-year-old hip-hop star’s lawyer vowed that “Rocky is going to be vindicated when all this is said and done, without question.”
SHAKIRA SETTLES TAX CASE – The Colombian superstar agreed to a deal with Spanish authorities to settle her $15 million criminal tax fraud case, which could have resulted in a significant prison sentence for the singer. After maintaining her innocence for five years, Shakira settled on the first day of a closely watched trial: “I need to move past the stress and emotional toll of the last several years and focus on the things I love,” she said.
ROD WAVE MERCH CRACKDOWN – The rapper won a federal court order empowering law enforcement to seize bootleg merchandise sold outside his Charlotte, N.C., concert, regardless of who was selling it. He’s the latest artist to file such a case to protect ever-more-valuable merch revenue following Metallica, SZA, Post Malone and many others.
MF DOOM NOTEBOOK BATTLE – Attorneys for Eothen “Egon” Alapatt fired back at a lawsuit that claims he stole dozens of private notebooks belonging to the late hip-hop legend MF Doom, calling the case “baseless and libelous” and telling his side of the disputed story.
“THE DAMAGE WILL BE DONE” – Universal Music Group asked for a preliminary injunction that would immediately block artificial intelligence company Anthropic PBC from using copyrighted music to train future AI models while their high-profile case plays out in court.
DIDDY TEQUILA CASE – In a separate legal battle involving Diddy, a New York appeals court hit pause on his lawsuit against alcohol giant Diageo that accused the company of racism and failing to adequately support his DeLeon brand of tequila. The court stayed the case while Diageo appeals a key ruling about how the dispute should proceed.
Microsoft snapped up Sam Altman and another architect of OpenAI for a new venture after their sudden departures shocked the artificial intelligence world, leaving the newly installed CEO of the ChatGPT maker to paper over tensions by vowing to investigate Altman’s firing.
The developments Monday come after a weekend of drama and speculation about how the power dynamics would shake out at OpenAI, whose chatbot kicked off the generative AI era by producing human-like text, images, video and music.
It ended with former Twitch leader Emmett Shear taking over as OpenAI’s interim chief executive and Microsoft announcing it was hiring Altman and OpenAI co-founder and former President Greg Brockman to lead Microsoft’s new advanced AI research team.
Despite the rift between the key players behind ChatGPT and the company they helped build, both Shear and Microsoft Chairman and CEO Satya Nadella said they are committed to their partnership.
Microsoft invested billions of dollars in the startup and helped provide the computing power to run its AI systems. Nadella wrote on X, formerly known as Twitter, that he was “extremely excited” to bring on the former executives of OpenAI and looked “forward to getting to know” Shear and the rest of the management team.
In a reply on X, Altman said “the mission continues,” while Brockman posted, “We are going to build something new & it will be incredible.”
OpenAI said Friday that Altman was pushed out after a review found he was “not consistently candid in his communications” with the board of directors, which had lost confidence in his ability to lead the company.
In an X post Monday, Shear said he would hire an independent investigator to look into what led up to Altman’s ouster and write a report within 30 days.
“It’s clear that the process and communications around Sam’s removal has been handled very badly, which has seriously damaged our trust,” wrote Shear, who co-founded Twitch, an Amazon-owned livestreaming service popular with video gamers.
He said he also plans in the next month to “reform the management and leadership team in light of recent departures into an effective force” and speak with employees, investors and customers.
After that, Shear said he would “drive changes in the organization,” including “significant governance changes if necessary.” He noted that the reason behind the board removing Altman was not a “specific disagreement on safety.”
OpenAI last week declined to answer questions on what Altman’s alleged lack of candor was about. Its statement said his behavior was hindering the board’s ability to exercise its responsibilities.
An OpenAI spokeswoman didn’t immediately reply to an email Monday seeking comment. A Microsoft representative said the company would not be commenting beyond its CEO’s statement.
After Altman was pushed out Friday, he stirred speculation that he might be coming back into the fold in a series of tweets. He posted a photo of himself with an OpenAI guest pass on Sunday, saying this is “first and last time i ever wear one of these.”
Hours earlier, he tweeted, “i love the openai team so much,” which drew heart replies from Brockman, who quit after Altman was fired, and Mira Murati, OpenAI’s chief technology officer who was initially named as interim CEO.
It’s not clear what transpired between the announcement of Murati’s interim role Friday and Shear’s hiring, though she was among several employees on Monday who tweeted, “OpenAI is nothing without its people.” Altman replied to many with heart emojis.
Shear said he stepped down as Twitch CEO because of the birth of his now-9-month-old son but “took this job because I believe that OpenAI is one of the most important companies currently in existence.”
His beliefs on the future of AI came up on a podcast in June. Shear said he’s generally an optimist about technology but has serious concerns about the path of artificial intelligence toward building something “a lot smarter than us” that sets itself on a goal that endangers humans.
It’s an issue that Altman consistently faced since he helped catapult ChatGPT to global fame. In the past year, he has become Silicon Valley’s most sought-after voice on the promise and potential dangers of artificial intelligence.
He went on a world tour to meet with government officials earlier this year, drawing big crowds at public events as he discussed both the risks of AI and attempts to regulate the emerging technology.
Altman posted Friday on X that “i loved my time at openai” and later called his ouster a “weird experience.”
“If Microsoft lost Altman he could have gone to Amazon, Google, Apple, or a host of other tech companies craving to get the face of AI globally in their doors,” Daniel Ives, an analyst with Wedbush Securities, said in a research note.
Microsoft is now in an even stronger position on AI, Ives said. Its shares rose nearly 2% before the opening bell and were nearing an all-time high Monday.
The Associated Press and OpenAI have a licensing and technology agreement allowing OpenAI access to part of the AP’s text archives.
Universal Music Group (UMG) wants a federal judge to immediately block artificial intelligence company Anthropic PBC from using copyrighted music to train future AI models, warning that the “damage will be done” by the time the case is over.
A month after UMG sued Anthropic for infringement over its use of copyrighted music to train its AI models, the music giant on Thursday demanded a preliminary injunction that will prohibit the AI firm from continuing to use its songs while the case plays out in court.
UMG warned that denying its request would allow Anthropic “to continue using the Works as inputs, this time to train a more-powerful Claude, magnifying the already-massive harm to Publishers and songwriters.”
“Anthropic must not be allowed to flout copyright law,” UMG’s lawyers wrote. “If the Court waits until this litigation ends to address what is already clear—that Anthropic is improperly using Publishers’ copyrighted works—then the damage will be done.”
“Anthropic has already usurped Publishers’ and songwriters’ control over the use of their works, denied them credit, and jeopardized their reputations,” the company wrote. “If unchecked, Anthropic’s wanton copying will also irreversibly harm the licensing market for lyrics, Publishers’ relationships with licensees, and their goodwill with the songwriters they represent.”
UMG filed its lawsuit Oct. 18, marking the first major case in what is expected to be a key legal battle over the future of AI music. Joined by Concord Music Group, ABKCO and other music companies, UMG claims that Anthropic – valued at $4.1 billion earlier this year – is violating copyrights en masse by using songs without authorization to teach its AI models how to spit out new lyrics.
“In the process of building and operating AI models, Anthropic unlawfully copies and disseminates vast amounts of copyrighted works,” lawyers for the music companies wrote. “Publishers embrace innovation and recognize the great promise of AI when used ethically and responsibly. But Anthropic violates these principles on a systematic and widespread basis.”
AI models like the popular ChatGPT are “trained” to produce new content by feeding them vast quantities of existing works known as “inputs.” Whether doing so infringes the copyrights to that underlying material is something of an existential question for the booming sector, since depriving AI models of new inputs could limit their abilities. Content owners in many sectors – including book authors, comedians and visual artists – have all filed similar lawsuits over training.
Anthropic and other AI firms believe that such training is protected by copyright’s fair use doctrine — an important rule that allows people to reuse protected works without breaking the law. In a filing at the Copyright Office last month, Anthropic previewed how it might make such an argument in UMG’s lawsuit.
“The copying is merely an intermediate step, extracting unprotectable elements about the entire corpus of works, in order to create new outputs,” the company wrote in that filing. “This sort of transformative use has been recognized as lawful in the past and should continue to be considered lawful in this case.”
But in Thursday’s motion for the injunction, UMG and the music companies sharply disputed such a notion, saying plainly: “Anthropic’s infringement is not fair use.”
“Anthropic … may argue that generative AI companies can facilitate immense value to society and should be excused from complying with copyright law to foster their rapid growth,” UMG wrote. “Undisputedly, Anthropic will be a more valuable company if it can avoid paying for the content on which it admittedly relies, but that should hardly compel the Court to provide it a get-out-of-jail-free card for its wholesale theft of copyrighted content.”
A spokesperson for Anthropic did not immediately return a request for comment on Friday.
CreateSafe, a music technology studio known best for its work on Grimes’ AI voice model, has raised $4.6 million in seed round funding for its new AI music creation toolkit, TRINITI.
Offering a “full creative stack” for musicians, from songwriting through release, TRINITI’s seed round was led by Polychain Capital, a cryptocurrency and blockchain tech investment firm, with participation from Crush Ventures, Anthony Saleh (manager of Kendrick Lamar, Nas and Gunna), Paris Hilton’s 11:11 Media, MoonPay, Chaac Ventures, Unified Music Group and Dan Weisman (vp at Bernstein Private Wealth Management).
Grimes has also joined CreateSafe’s advisory board and will continue to collaborate with the company.
Starting today, TRINITI will offer five tools:
Voice transformation and cloning: make your own voice model and offer it up for licensing, or transform your voice into someone else’s
Sample generation: create audio samples from text-based prompts
Chat: ask questions of a chatbot trained on music industry knowledge
Distribution: share music on streaming services
Management: manage rights to songs and records
“Music is the core of humankind,” said CreateSafe founder/CEO Daouda Leonard. “However, the story of music as a profession has been corrupted by middle men, who have misguided the industry while taking money from artists. For a few years, we’ve been saying that we are building the operating system for the new music business. With AI, it’s possible to fulfill that promise. We want to pioneer the age of exponential creativity and give power back to creators. With TRINITI, you can turn inspiration into a song and set of visuals. That music gets distributed to DSPs, a marketing plan can be generated, and all of the business on the backend can be easily managed. This whole process takes seconds.”
“As a team we’d always discussed finding novel ways of wealth redistribution via art,” added Grimes. “We immediately hopped onto blockchain tech because of the new possibilities for distribution, cutting out middle men, etc. Throwing generative music into the picture and removing all our label strings so we can reward derivative music — combined with everything we’d been working towards the last few years with blockchain — allowed a unique approach to distribution.
“I’m really proud of the team that they were able to execute this so fast and with such vision,” Grimes continued. “There’s a lot to talk about but ultimately, art generates so much money as an industry and artists see so little of it. A lot of people talk about abundance as one of the main end goals of tech, acceleration, AI, etc… for us the first step is actually figuring out how to remove friction from the process of getting resources into artists’ hands.”
Robert Kyncl, CEO of Warner Music Group, praised YouTube’s AI-powered voice generation experiment, which launched this week with the participation of several Warner acts, including Charlie Puth and Charli XCX, during a call with financial analysts on Thursday (Nov. 16).
Kyncl proposed a thought experiment: “Imagine in the early 2000s, if the file-sharing companies came to the music industry, and said, ‘would you like to experiment with this new tool that we built and see how it impacts the industry and how we can work together?’ It would have been incredible.”
While it’s hard to imagine the tech-averse music industry of the early 2000s would’ve jumped at this opportunity, Kyncl described YouTube’s effort as “the first time that a large platform at a massive scale that has new tools at its disposal is proactively reaching out to its [music] partners to test and learn.” “I just want to underscore the significance of this kind of engagement,” he added. (He previously worked as chief business officer at YouTube.)
For the benefit of analysts, Kyncl also outlined the company’s three-pronged approach to managing the rapid emergence of AI-powered technologies. First, he said it was important to pay attention to “generative AI engines,” ensuring that they are “licensing content for training” models, “keeping records of inputs so that provenance can be tracked,” and using a “watermarking” system so that outputs can be tracked.
The next area of focus for Warner: The platforms — Spotify, TikTok, YouTube, Instagram, and more — where, as Kyncl put it, “most of the content… will end up because people who are creating want views or streams.” To manage the proliferation of AI-generated music on these services, Kyncl hoped to build on the blueprint the music industry has developed around monitoring and monetizing user-generated content, especially on YouTube, and “write the fine print for the AI age.”
Last but certainly not least, Kyncl said he was meeting with both politicians and regulators “to make sure that regulation around AI respects the creative industries.” He suggested two key goals in this arena: That “licensing for training [AI models] is required,” and that “name, image, likeness, and voice is afforded the same protection as copyright.”