
Most conversations around AI in music focus on music creation, protecting artists and rightsholders, and differentiating human-made music from machine-made works. That discourse is worth having, but the technology also has underexplored strengths. One use with immense potential to positively impact artists is music marketing.

As generative and complementary AI is becoming a larger part of creative works in music, marketing will play a larger role than ever before. Music marketing isn’t just about reaching new and existing fans and promoting upcoming singles. Today, music marketing must establish an artist’s ownership of their work and ensure that the human creatives involved are known, recognized, and appreciated. We’re about to see the golden age of automation for artists who want to make these connections and gain this appreciation.

While marketing is a prerequisite to a creator’s success, it demands a lot of time, energy, and resources. According to Linktree’s 2023 Creator Report, 48% of creators who make $100-500k per year spend more than 10 hours on content creation every week. On top of that, three out of four creators want to diversify what they create but feel pressure to keep making what is rewarded by the algorithm. Rather than fighting the impossible battle of constantly evolving and cranking out more content to match what the algorithm is boosting this week, creatives can have a much greater impact by focusing on their brand and making high-quality content for their audience.

For indie artists without support from labels and dedicated promotion teams, the constant pressure to push their new single on TikTok, post on Instagram, and engage with fans while finding the time to make new music is overwhelming. The pressure is only building, thanks to changes in streaming payouts. Indie artists need to reach escape velocity faster.

Megh Vakharia

AI-powered music marketing can lighten that lift, generating campaign templates and delivering to artists the data they need to reach their intended audience. AI can take the data that artists and creators generate and put it to work in a meaningful way, automatically extracting insights from analytics to build marketing campaigns and map out tactics that get results.

AI-driven campaigns can give creators back the time they need to do what they do best: create. While artificial intelligence saves artists time and generates actionable solutions for music promotion, it is still highly dependent on the artist’s input and human touch. Just as a flight captain has to set route information and parameters before switching on autopilot, an artist enters their content, ideas, intended audience, and hopeful outcome of the marketing campaign. Then, using this information, the AI-powered marketing platform can provide all of the data and suggestions necessary to produce the targeted results.  

Rather than taking over the creative process, AI should be used to assist and empower artists to be more creative. It can help put the joy back into what can be a truly fun process — finding, reaching, and engaging with fans. 

A large portion of artists who have tapped into AI marketing have never spent money on marketing before, but with the help of these emerging tools, planning and executing effective campaigns is more approachable and intuitive. As the music industry learns more about artificial intelligence and debates its ethical implications in music creation, equal thought must be given to the opportunities that it unlocks for artists to grow their fanbases, fuel more sustainable careers, and promote their human-made work.

Megh Vakharia is the co-founder and CEO of SymphonyOS, the AI-powered marketing platform empowering creatives to build successful marketing campaigns that generate fan growth using its suite of smart, automated marketing tools.

Earlier this month, 760 stations owned by iHeartMedia simultaneously threw their weight behind a new single: The Beatles’ “Now and Then.” This was surprising, because the group broke up in 1970 and two of the members are dead. “Now and Then” began decades ago as a home recording by John Lennon; more recently, AI-powered audio technology allowed for the separation of the demo’s audio components — isolating the voice and the piano — which in turn enabled the living Beatles to construct a whole track around them and roll it out to great fanfare. 

“For three days, if you were a follower of popular culture, all you heard about was The Beatles,” says Arron Saxe, who represents several estates, including Otis Redding’s and Bill Withers’s. “And that’s great for the business of the estate of John Lennon and the estate of George Harrison and the current status of the two living legends.”

For many people, 2023 has been the year that artificial intelligence technology left the realm of science fiction and crashed rudely into daily life. And while AI-powered tools have the potential to impact wide swathes of the music industry, they are especially intriguing for those who manage estates or the catalogs of dead artists. 

That’s because there are inherent constraints involved with this work: No one is around to make new stuff. But as AI models get better, they have the capacity to knit old materials together into something that can credibly pass as new — a reproduction of a star’s voice, for example. “As AI develops, it may impact the value of an estate, depending on what assets are already in the estate and can be worked with,” says Natalia Nataskin, chief content officer for Primary Wave, who estimates that she and her team probably spend around 25% of their time per week mulling AI (time she says they used to spend contemplating possibilities for NFTs).

And a crucial part of an estate manager’s job, Saxe notes, is “looking for opportunities to earn revenue.” “Especially with my clients who aren’t here,” he adds, “you’re trying to figure out, how do you keep it going forward?”

The answer, according to half a dozen executives who work with estates or catalogs of dead artists or songwriters, is “very carefully.” “We say no to 99 percent of opportunities,” Saxe says. 

“You have this legacy that is very valuable, and once you start screwing with it, you open yourself up to causing some real damage,” adds Jeff Jampol, who handles the estates of The Doors, Janis Joplin and more. “Every time you’re going to do something, you have to be really protective. It’s hard to be on the bleeding edge.”

To work through these complicated issues, WME went so far as to establish an AI Task Force where agents from every division educate themselves on different platforms and tools to “get a sense for what is out there and where there are advantages to bring to our clients,” says Chris Jacquemin, the company’s head of digital strategy. The task force also works with WME’s legal department to gain “some clarity around the types of protections we need to be thinking about,” he continues,  as well as with the agency’s legislative division in Washington, D.C. 

At the moment, Jampol sees two potentially intriguing uses of AI in his work. “It would be very interesting to have, for instance, Jim Morrison narrate his own documentary,” he explains. He could also imagine using an AI voice model to read Morrison’s unrecorded poetry. (The Doors singer did record some poems during his lifetime, suggesting he was comfortable with this activity.) 

On Nov. 15, Warner Music Group announced a potentially similar initiative, partnering with the French great Edith Piaf’s estate to create a voice model — based on the singer’s old interviews — which will narrate the animated film Edith. The executors of Piaf’s estate, Catherine Glavas and Christie Laume, said in a statement that “it’s been a special and touching experience to be able to hear Edith’s voice once again — the technology has made it feel like we were back in the room with her.”

The use of AI tech to recreate a star’s speaking voice is “easier” than attempting to put together an AI model that will replicate a star singing, according to Nataskin. “We can train a model on only the assets that we own — on the speaking voice from film clips, for example,” she explains. 

In contrast, to train an AI model to sing like a star of old, the model needs to ingest a number of the artist’s recordings. That requires the consent of other rights holders — the owners of those recordings, which may or may not be the estate, as well as anyone involved in their composition. Many who spoke to Billboard for this story said they were leery of AI making new songs in the name of bygone legends. “To take a new creation and say that it came from someone who isn’t around to approve it, that seems to me like quite a stretch,” says Mary Megan Peer, CEO of the publisher peermusic. 

Outside the United States, however, the appetite for this kind of experimentation may differ. Roughly a year ago, the Chinese company Tencent Music Entertainment told analysts that it used AI-powered technology to create new vocal tracks from dead singers, one of which went on to earn more than 100 million streams.

For now, at least, Nataskin characterized Primary Wave as focused on “enhancing” with AI tech, “rather than creating something from scratch.” And after Paul McCartney initially mentioned that artificial intelligence played a role in “Now and Then,” he quickly clarified on X that “nothing has been artificially or synthetically created,” suggesting there is still some stigma around the use of AI to generate new vocals from dead icons. The tech just “cleaned up some existing recordings,” McCartney noted.

This kind of AI use for “enhancing” and “cleaning up,” tweaking and adjusting has already been happening regularly for several years. “For all of the industry freakout about AI, there’s actually all these ways that it’s already operating everyday on behalf of artists or labels that isn’t controversial,” says Jessica Powell, co-founder and CEO of Audioshake, a company that uses AI-powered technology for stem separation. “It can be pretty transformational to be able to open up back catalog for new uses.”

The publishing company peermusic used AI-powered stem separation to create instrumentals for two tracks in its catalog — Gaby Moreno’s “Fronteras” and Rafael Solano’s “Por Amor” — which could then be placed in ads for Oreo and Don Julio, respectively. Much like the Beatles, Łukasz Wojciechowski, co-founder of Astigmatic Records, used stem separation to isolate, and then remove distortion from, the trumpet part in a previously unreleased recording he found of jazz musician Tomasz Stanko. After the clean up, the music could be released for the first time. “I’m seeing a lot of instances with older music where the quality is really poor, and you can restore it,” Wojciechowski says.

Powell acknowledges that these uses are “not a wild proposition like, ‘create a new voice for artist X!’” Those have been few and far between — at least the authorized ones. (Hip-hop fans have been using AI-powered technology to turn snippets of rap leaks from artists like Juice WRLD, who died in 2019, into “finished” songs.) For now, Saxe believes “there hasn’t been that thing where people can look at it and go, ‘They nailed that use of it.’ We haven’t had that breakout commercial popular culture moment.”

It’s still early, though. “Where we go with things like Peter Tosh or Waylon Jennings or Eartha Kitt, we haven’t decided yet,” says Phil Sandhaus, head of WME Legends division. “Do we want to use voice cloning technologies out there to create new works and have Eartha Kitt in her unique voice sing a brand new song she’s never sung before? Who knows? Every family, every estate is different.”

Additional reporting by Melinda Newman

Full-service music company ONErpm is filling out further with the launch of two divisions, one being a new administration system meant to simplify managing an artist’s day-to-day needs, and the other an updated distribution platform geared for budget-crunched DIYers.

Listeners remain wary of artificial intelligence, according to Engaging with Music 2023, a forthcoming report from the International Federation of the Phonographic Industry (IFPI) that seems aimed in particular at government regulators.

The IFPI surveyed 43,000 people across 26 countries, coming to the conclusion that 76% of respondents “feel that an artist’s music or vocals should not be used or ingested by AI without permission,” and 74% believe “AI should not be used to clone or impersonate artists without authorisation.”

The results are not surprising. Most listeners probably weren’t thinking much, if at all, about AI and its potential impacts on music before 2023. (Some still aren’t thinking about it: 89% of those surveyed said they were “aware of AI,” leaving 11% who have somehow managed to avoid a massive amount of press coverage this year.) New technologies are often treated with caution outside the tech industry. 

It’s also easy for survey respondents to support statements about getting authorization for something before doing it — that generally seems like the right thing to do. But historically, artists haven’t always been interested in preemptively obtaining permission. 

Take the act of sampling another song to create a new composition. Many listeners would presumably agree that artists should go through the process of clearing a sample before using it. In reality, however, many artists sample first and clear later, sometimes only if they are forced to.

In a statement, Frances Moore, IFPI’s CEO, said that the organization’s survey serves as a “timely reminder for policymakers as they consider how to implement standards for responsible and safe AI.”

U.S. policymakers have been moving slowly to develop potential guidelines around AI. In October, a bipartisan group of senators released a draft of the NO FAKES Act, which aims to prevent the creation of “digital replicas” of an artist’s image, voice, or visual likeness without permission.

“Generative AI has opened doors to exciting new artistic possibilities, but it also presents unique challenges that make it easier than ever to use someone’s voice, image, or likeness without their consent,” Senator Chris Coons said in a statement. “Creators around the nation are calling on Congress to lay out clear policies regulating the use and impact of generative AI.”

As we get closer to the big reveal of Grand Theft Auto 6, the hype for the highly-anticipated video game is so thick you can cut it with a knife.
GTA VI is trending on X, formerly known as Twitter, but this time around, the energy is much different. Unlike the other times the long-awaited sequel to GTA V trended, there is actual excitement now because a trailer for the game is on the horizon.
Rockstar Games confirmed a trailer is on the way following the video game equivalent of a Woj Bomb: Bloomberg’s Jason Schreier coming through with slam-dunk reporting that the studio was dropping a trailer for the game.
In a post shared on X, formerly known as Twitter, Rockstar Games president Sam Houser wrote:
“We are very excited to let you know that in early December, we will release the first trailer for the next Grand Theft Auto. We look forward to many more years of sharing these experiences with all of you.”

Gamers React To GTA VI’s Trailer Coming In December
With that in mind and Thanksgiving and Black Friday now behind us, gamers have taken to Elon Musk’s trash platform to express how excited they are about GTA VI’s announcement being on the horizon.
“The only thing on my mind right now is #GTAVI December hurry up!!! Its gonna be game of the generation bruh….,” one post on X read. 
Another post read, “I hope we get that #GTAVI trailer Friday but I’m expecting nothing until next week at least. Regardless, I can’t believe we’ve made it this close.”
We feel both posts and wonder when the trailer will drop in December.
There is a strong chance it could happen during the upcoming Game Awards. 
We shall see.

Most people send files to collaborators without a second thought — open a new email or text, click attach, hit send. Benjamin Thomas is not most people. “What I normally do is I encrypt it and send it with a 20 character randomized password via email,” he says. “And then I do a verbal confirmation of who the file is going to and deliver them the password through another method.”
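
The password half of the workflow Thomas describes is easy to picture in code. Here's a minimal sketch of generating a 20-character randomized password with Python's standard library; the function name and character set are our assumptions, not his actual setup:

```python
import secrets
import string

def make_password(length=20):
    # Draw from letters, digits, and punctuation using the
    # cryptographically secure `secrets` module rather than `random`.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

password = make_password()  # a fresh 20-character password every call
```

The file itself would then be encrypted with that password using a tool such as GPG or 7-Zip; the key point in Thomas's scheme is that the password travels through a different channel than the file.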

Thomas is not employed by the National Security Agency; he’s an engineer who works closely with the rapper Lil Uzi Vert. But since Lil Uzi Vert’s most passionate fans are not fond of waiting for him to release music at his own pace — they want to hear it now, and will happily consume leaked songs in whatever form they can find them — these sort of safeguards are necessary.

Thomas’ precautions have helped drastically reduce the frequency of leaks. “He’s got a real blueprint for engineers to keep sh– under lock and key,” says Jason Berger, a partner at Lewis Brisbois, where he represents a number of producers who frequently collaborate with Lil Uzi Vert. “Uzi’s stuff used to leak a lot,” the attorney notes. “From about March of 2020 until the Pink Tape dropped [in June, 2023], not one f—ing record leaked out.”

But leaks remain a fairly regular occurrence in the music industry, especially in hip-hop, and happen in myriad ways. Despite being digital natives, the younger listeners who tend to drive music trends are susceptible to being swindled: The Federal Trade Commission reported last year that 44% of people ages 20 to 29 said they lost money to online fraud, compared to 20% of people ages 70 to 79. When it comes to music, a lot of leaks boil down to “doing dumb stuff on the internet,” as Thomas puts it.

He puts leaks into two categories: Some stem from carelessness, others from hacking. A lot of the careless leaks are the result of common email phishing techniques.

The producer Warren “Oak” Felder recently received an email from his assistant — or so he thought. “He was asking me for something that made sense: ‘Hey, I need the bounces for some records because management asked,’” Felder says. But one sentence in the email stuck out for its odd construction, so the producer texted his assistant, who confirmed he didn’t send the email. “The amount of fake emails I get is crazy,” Thomas adds.  

The same thing also happens by text. “Friends of mine in the industry have fallen victim to people phishing, where they’ll get a text from somebody that is acting like somebody else, maybe an artist, saying they had to get a new phone, for example,” says Anthony Cruz, an engineer who works closely with Meek Mill. Maybe they ask for a demo, or maybe “they’ll send you a link, and that file ends up hacking your entire account,” prying loose any closely guarded tracks. 

Obtaining leaks via hacks can be more sophisticated, as with the technique known as SIM-swapping. “They’ll first find as much information as they can about the person that they want to hack,” explains the producer Waves, who has an unreleased song with Juice WRLD that’s floating around online due to a leak. “Then they would call your cell phone provider, say, ‘Hey, I lost my SIM card, I just got another one. Can you transfer my number over to this phone?’”

If the perpetrator has been able to glean enough personal information — ranging from troves of previously hacked passwords that exist online to things like a mother’s maiden name — they can waltz past account protections and take over the account of the target phone. “From there, they just look through your email,” Waves says. “Sometimes they’ll even find full Pro Tools sessions and they’ll sell those. Honestly, some of them are pretty good hackers.”

Even if engineers, producers and artists are vigilant about protecting their own phones and computers, that may not be enough. Studios can be surprisingly loose with valuable materials. “A year ago, one of my clients was in one of these major recording studios and all of a sudden he’s hearing a collaboration between an A-list artist and somebody else that nobody even knew happened,” says Dylan Bourne, who manages artists and producers. “He was hearing it by accident in the studio because it was just on the computer.” Thomas “heard a story about somebody sitting outside the studio who logged into the Wi-Fi from a car,” enabling him to make off with files. 

And yet another moment of vulnerability occurs when artists ask other acts for features and send over a track. “Now you’re relying on that artist and their team to protect the files,” Cruz says. “It’s out of your hands. A couple of leaks that we have been a part of have been because of that.”

Due to the danger of files ending up in the wrong hands, “a lot of artists are starting to use private servers to share music,” according to Felder. “They’re saying, ‘listen, if you send me the record, don’t text it to me, don’t email it to me.’” 

One of Bourne’s clients, a producer, recently had to determine splits on a song he worked on for an A-list artist. The artist convened a listening session on Zoom “so that they could know what it made sense to argue for, but not have access to the song in any capacity,” Bourne says. “In the past people would have sent listening links.”

Another tactic Felder has taken up is “naming things cryptically.” This way, in case someone gets into a Dropbox folder or email and roots around for demos of songs featuring notable artists, at least that person can’t easily figure out who is recorded on what.
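
A crude version of that cryptic naming could be a salted hash of the real song title, so anyone browsing a stolen folder sees only opaque filenames. This is a sketch under assumed names, not Felder's actual scheme:

```python
import hashlib

def cryptic_name(title, salt="session-salt", ext=".wav"):
    # Hash the salt plus the real title; the filename becomes an
    # opaque hex string with no hint of the featured artist.
    digest = hashlib.sha256((salt + title).encode("utf-8")).hexdigest()
    return digest[:12] + ext

opaque = cryptic_name("feature_demo_bigartist")
```

Only someone holding the salt and a private list of real titles can map the opaque names back to songs; a snoop in the Dropbox folder learns nothing from the listing.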

Leaks are “not a situation that’s going to go away,” Bourne continues. “Artists who care have to get ahead of it and be more protective about the music.”

Gamers can finally take advantage of Black Friday deals including on the popular Meta Quest 2, which is being discounted for […]

Slim Shady is officially coming to Fortnite. After a day of speculation, Eminem and Epic Games confirmed the rapper will be part of the upcoming “Big Bang” event closing out the extremely popular game’s highly successful Fortnite OG chapter.

Epic Games confirmed that Fortnite players will be greeted by a new Big Bang loading screen featuring the “Without Me” rapper in preparation for a virtual performance from the Detroit rapper.
Also, if you’re going to attend the event, you have to look the part, so of course, three Eminem-related skins will be available: Rap Boy, Slim Shady, and Marshall Never More, which also come with matching accessories.
The new looks, the latest addition to Fortnite’s “Icon Series,” will be available starting Wednesday, November 29, at 7 PM ET.
Those who attend the Big Bang event will unlock the Marshall Magma Style for the outfit, whether you purchased the outfit before or after attending.
As expected, Eminem and Fortnite fans have been reacting to the news of the legendary rapper coming to the video game.
“imma be honest… Eminem in Fortnite will go hard,” one X user wrote. 

When Dierks Bentley’s band is looking for something to keep it occupied during long bus rides, the group has, at times, turned to artificial intelligence apps, asking them to create album reviews or cover art for the group’s alter ego, The Hot Country Knights.
“So far,” guitarist Charlie Worsham says, “AI does not completely understand The Hot Country Knights.”

By the same token, Music Row doesn’t completely understand AI, but the developing technology is here, inspiring tech heads and early adopters to experiment with it, using it to get a feel, for example, for how Bentley’s voice might fit a new song or to kick-start a verse that has the writer stumped. But it has also inspired a palpable amount of fear among artists anticipating their voices will be misused and among musicians who feel they’ll be completely replaced.

“As a songwriter, I see the benefit that you don’t have to shell out a ton of money for a demo singer,” one attendee said during the Q&A section of an ASCAP panel about AI on Nov. 7. “But also, as a demo singer, I’m like, ‘Oh, shit, I’m out of a job.’”

That particular panel, moderated by songwriter-producer Chris DeStefano (“At the End of a Bar,” “That’s My Kind of Night”), was one of three AI presentations that ASCAP hosted at Nashville’s Twelve Thirty Club the morning after the ASCAP Country Music Awards, hoping to educate Music City about the burgeoning technology. The event addressed the creative possibilities ahead, the evolving legal discussion around AI and the ethical questions that it raises. (ASCAP has endorsed six principles for AI frameworks.)

The best-known examples of AI’s entry into music have revolved around the use of public figures’ voices in novel ways. Hip-hop artist Drake, in one prominent instance, had his voice re-created in a cover of “Bubbly,” originated by Colbie Caillat, who released her first country album, Along the Way, on Sept. 22. 

“Definitely bizarre,” Caillat said during CMA Week activities. “I don’t think it’s good. I think it makes it all too easy.”

But ASCAP panelists outlined numerous ways AI can be employed for positive uses without misappropriating someone’s voice. DeStefano uses AI-assisted tools from iZotope, which have learned his mixing tendencies, to elevate his tracks to “another level.” Independent hip-hop artist Curtiss King has used AI to handle tasks outside of his wheelhouse that he can’t afford to outsource, such as graphic design or developing video ideas for social media. Singer-songwriter Anna Vaus instructed AI to create a 30-day social media campaign for her song “Halloween on Christmas Eve” and has used it to adjust her bio or press releases — “stuff,” she says, “that is not what sets my soul on fire.” It allows her more time, she said, for “sitting in my room and sharing my human experiences.”

All of this forward motion is happening faster in some other genres than it is in country, and the abuses — the unauthorized use of Drake’s voice or Tom Cruise’s image — have entertainment lawyers and the Copyright Office playing catch-up. Those examples test the application of the fair use doctrine in copyright law, which allows creators to play with existing copyrights. But as Sheppard Mullin partner Dan Schnapp pointed out during the ASCAP legal panel, fair use requires the new piece to be a transformative product that does not damage the market for the original work. When Drake’s voice is being applied without his consent to a song he has never recorded and he is not receiving a royalty, that arguably affects his marketability.

The Copyright Office has declined to offer copyright protection for AI creations, though works that are formed through a combination of human and artificial efforts complicate the rule. U.S. Copyright Office deputy general counsel Emily Chapuis pointed to a comic book composed by a human author who engaged AI for the drawings. Copyright was granted to the text, but not the illustrations.

The legal community is also sorting through rights to privacy and so-called “moral rights,” the originator’s ability to control how a copyright is used.

“You can’t wait for the law to catch up to the tech,” Schnapp said during the legal panel. “It never has and never will. And now, this is the most disruptive technology that’s hit the creative industry, generally, in our lifetime. And it’s growing exponentially.”

Which has some creators uneasy. Carolyn Dawn Johnson asked from the audience if composers should stop using their phones during writing appointments because ads can track typed and spoken activity, thus opening the possibility that AI begins to draw on content that has never been included in copyrighted material. The question was not fully answered.

But elsewhere, Nashville musicians are beginning to use AI in multiple ways. Restless Road has had AI apply harmonies to songwriter demos to see if a song might fit its sound. Elvie Shane, toying with a chatbot, developed an idea that he turned into a song about the meth epidemic, “Appalachian Alchemy.” Chase Matthew’s producer put a version of his voice on a song to convince him to record it. Better Than Ezra’s Kevin Griffin, who co-wrote Sugarland’s “Stuck Like Glue,” has asked AI to suggest second verses on songs he was writing — the verses are usually pedestrian, but he has found “one nugget” that helped finish a piece. 

The skeptics have legitimate points, but skeptics also protested electronic instruments, drum machines, CDs, file sharing and programmed tracks. The industry has inevitably adapted to those technologies. And while AI is scary, early adopters seem to think it’s making them more productive and more creative.

“It’s always one step behind,” noted King. “It can make predictions based upon the habits that I’ve had, but there’s so many interactions that I have because I’m a creative and I get creative about where I’m going next … If anything, AI has given me like a kick in the butt to be more creative than I’ve ever been before.”

Songwriter Kevin Kadish (“Whiskey Glasses,” “Soul”) put the negatives of AI into a bigger-picture perspective.

“I’m more worried about it for like people’s safety and all the scams that happen on the phone,” he said on the ASCAP red carpet. “Music is the least of our worries with AI.”

Subscribe to Billboard Country Update, the industry’s must-have source for news, charts, analysis and features. Sign up for free delivery every weekend.

The ousted leader of ChatGPT-maker OpenAI is returning to the company that fired him late last week, culminating a days-long power struggle that shocked the tech industry and brought attention to the conflicts around how to safely build artificial intelligence.


San Francisco-based OpenAI said in a statement late Tuesday: “We have reached an agreement in principle for Sam Altman to return to OpenAI as CEO with a new initial board.”

The board, which replaces the one that fired Altman on Friday, will be led by former Salesforce co-CEO Bret Taylor, who also chaired Twitter’s board before its takeover by Elon Musk last year. The other members will be former U.S. Treasury Secretary Larry Summers and Quora CEO Adam D’Angelo.

OpenAI’s previous board of directors, which included D’Angelo, had refused to give specific reasons for why it fired Altman, leading to a weekend of internal conflict at the company and growing outside pressure from the startup’s investors.

The chaos also accentuated the differences between Altman — who’s become the face of generative AI’s rapid commercialization since ChatGPT’s arrival a year ago — and members of the company’s board who have expressed deep reservations about the safety risks posed by AI as it gets more advanced.

Microsoft, which has invested billions of dollars in OpenAI and has rights to its current technology, quickly moved to hire Altman on Monday, as well as another co-founder and former president, Greg Brockman, who had quit in protest after Altman’s removal. That emboldened a threatened exodus of nearly all of the startup’s 770 employees who signed a letter calling for the board’s resignation and Altman’s return.

One of the four board members who participated in Altman’s ouster, OpenAI co-founder and chief scientist Ilya Sutskever, later expressed regret and joined the call for the board’s resignation.

Microsoft in recent days had pledged to welcome all employees who wanted to follow Altman and Brockman to a new AI research unit at the software giant. Microsoft CEO Satya Nadella also made clear in a series of interviews Monday that he was still open to the possibility of Altman returning to OpenAI, so long as the startup’s governance problems are solved.

“We are encouraged by the changes to the OpenAI board,” Nadella posted on X late Tuesday. “We believe this is a first essential step on a path to more stable, well-informed, and effective governance.”

In his own post, Altman said that “with the new board and (with) Satya’s support, I’m looking forward to returning to OpenAI, and building on our strong partnership with (Microsoft).”

Co-founded by Altman as a nonprofit with a mission to safely build so-called artificial general intelligence that outperforms humans and benefits humanity, OpenAI later became a for-profit business but one still run by its nonprofit board of directors. It’s not clear yet if the board’s structure will change with its newly appointed members.

“We are collaborating to figure out the details,” OpenAI posted on X. “Thank you so much for your patience through this.”

Nadella said Brockman, who was OpenAI’s board chairman until Altman’s firing, will also have a key role to play in ensuring the group “continues to thrive and build on its mission.”

Hours earlier, Brockman returned to social media as if it were business as usual, touting a feature called ChatGPT Voice that was rolling out to users.

“Give it a try — totally changes the ChatGPT experience,” Brockman wrote, flagging a post from OpenAI’s main X account that featured a demonstration of the technology and playfully winking at recent turmoil.

“It’s been a long night for the team and we’re hungry. How many 16-inch pizzas should I order for 778 people,” the person asks, using the number of people who work at OpenAI. ChatGPT’s synthetic voice responded by recommending around 195 pizzas, ensuring everyone gets three slices.
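
ChatGPT's estimate checks out if you assume a 16-inch pizza is cut into 12 slices (the slice count is our assumption; the demo didn't specify):

```python
import math

people = 778
slices_per_person = 3
slices_per_pizza = 12  # assumed cut for a 16-inch pie

total_slices = people * slices_per_person           # 2334 slices needed
pizzas = math.ceil(total_slices / slices_per_pizza)  # round up whole pies
print(pizzas)  # 195, matching the "around 195" answer
```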

As for OpenAI’s short-lived interim CEO Emmett Shear, the second interim CEO in the days since Altman’s ouster, he posted on X that he was “deeply pleased by this result, after ~72 very intense hours of work.”

“Coming into OpenAI, I wasn’t sure what the right path would be,” wrote Shear, the former head of Twitch. “This was the pathway that maximized safety alongside doing right by all stakeholders involved. I’m glad to have been a part of the solution.”