artificial intelligence


Microsoft snapped up Sam Altman and another architect of OpenAI for a new venture after their sudden departures shocked the artificial intelligence world, leaving the newly installed CEO of the ChatGPT maker to paper over tensions by vowing to investigate Altman’s firing.

The developments Monday come after a weekend of drama and speculation about how the power dynamics would shake out at OpenAI, whose chatbot kicked off the generative AI era by producing human-like text, images, video and music.

It ended with former Twitch leader Emmett Shear taking over as OpenAI’s interim chief executive and Microsoft announcing it was hiring Altman and OpenAI co-founder and former President Greg Brockman to lead Microsoft’s new advanced AI research team.

Despite the rift between the key players behind ChatGPT and the company they helped build, both Shear and Microsoft Chairman and CEO Satya Nadella said they are committed to their partnership.

Microsoft invested billions of dollars in the startup and helped provide the computing power to run its AI systems. Nadella wrote on X, formerly known as Twitter, that he was “extremely excited” to bring on the former executives of OpenAI and looked “forward to getting to know” Shear and the rest of the management team.

In a reply on X, Altman said “the mission continues,” while Brockman posted, “We are going to build something new & it will be incredible.”

OpenAI said Friday that Altman was pushed out after a review found he was “not consistently candid in his communications” with the board of directors, which had lost confidence in his ability to lead the company.

In an X post Monday, Shear said he would hire an independent investigator to look into what led up to Altman’s ouster and write a report within 30 days.

“It’s clear that the process and communications around Sam’s removal has been handled very badly, which has seriously damaged our trust,” wrote Shear, who co-founded Twitch, an Amazon-owned livestreaming service popular with video gamers.

He said he also plans in the next month to “reform the management and leadership team in light of recent departures into an effective force” and speak with employees, investors and customers.

After that, Shear said he would “drive changes in the organization,” including “significant governance changes if necessary.” He noted that the reason behind the board removing Altman was not a “specific disagreement on safety.”

OpenAI last week declined to answer questions on what Altman’s alleged lack of candor was about. Its statement said his behavior was hindering the board’s ability to exercise its responsibilities.

An OpenAI spokeswoman didn’t immediately reply to an email Monday seeking comment. A Microsoft representative said the company would not be commenting beyond its CEO’s statement.

After Altman was pushed out Friday, he stirred speculation in a series of tweets that he might be coming back into the fold. He posted a photo of himself with an OpenAI guest pass on Sunday, saying it was the “first and last time i ever wear one of these.”

Hours earlier, he tweeted, “i love the openai team so much,” which drew heart replies from Brockman, who quit after Altman was fired, and Mira Murati, OpenAI’s chief technology officer who was initially named as interim CEO.

It’s not clear what transpired between the announcement of Murati’s interim role Friday and Shear’s hiring, though she was among several employees on Monday who tweeted, “OpenAI is nothing without its people.” Altman replied to many with heart emojis.

Shear said he stepped down as Twitch CEO because of the birth of his now-9-month-old son but “took this job because I believe that OpenAI is one of the most important companies currently in existence.”

His beliefs on the future of AI came up on a podcast in June. Shear said he’s generally an optimist about technology but has serious concerns about the path of artificial intelligence toward building something “a lot smarter than us” that sets itself on a goal that endangers humans.

It’s an issue that Altman has consistently faced since he helped catapult ChatGPT to global fame. In the past year, he has become Silicon Valley’s most sought-after voice on the promise and potential dangers of artificial intelligence.

He went on a world tour to meet with government officials earlier this year, drawing big crowds at public events as he discussed both the risks of AI and attempts to regulate the emerging technology.

Altman posted Friday on X that “i loved my time at openai” and later called his ouster a “weird experience.”

“If Microsoft lost Altman he could have gone to Amazon, Google, Apple, or a host of other tech companies craving to get the face of AI globally in their doors,” Daniel Ives, an analyst with Wedbush Securities, said in a research note.

Microsoft is now in an even stronger position on AI, Ives said. Its shares rose nearly 2% before the opening bell and were nearing an all-time high Monday.

The Associated Press and OpenAI have a licensing and technology agreement allowing OpenAI access to part of the AP’s text archives.

Universal Music Group (UMG) wants a federal judge to immediately block artificial intelligence company Anthropic PBC from using copyrighted music to train future AI models, warning that the “damage will be done” by the time the case is over.

A month after UMG sued Anthropic for infringement over its use of copyrighted music to train its AI models, the music giant on Thursday demanded a preliminary injunction that would prohibit the AI firm from continuing to use its songs while the case plays out in court.

UMG warned that denying its request would allow Anthropic “to continue using the Works as inputs, this time to train a more-powerful Claude, magnifying the already-massive harm to Publishers and songwriters.”

“Anthropic must not be allowed to flout copyright law,” UMG’s lawyers wrote. “If the Court waits until this litigation ends to address what is already clear—that Anthropic is improperly using Publishers’ copyrighted works—then the damage will be done.”

“Anthropic has already usurped Publishers’ and songwriters’ control over the use of their works, denied them credit, and jeopardized their reputations,” the company wrote. “If unchecked, Anthropic’s wanton copying will also irreversibly harm the licensing market for lyrics, Publishers’ relationships with licensees, and their goodwill with the songwriters they represent.”

UMG filed its lawsuit Oct. 18, marking the first major case in what is expected to be a key legal battle over the future of AI music. Joined by Concord Music Group, ABKCO and other music companies, UMG claims that Anthropic — valued at $4.1 billion earlier this year — is violating copyrights en masse by using songs without authorization to teach its AI models how to spit out new lyrics.

“In the process of building and operating AI models, Anthropic unlawfully copies and disseminates vast amounts of copyrighted works,” lawyers for the music companies wrote. “Publishers embrace innovation and recognize the great promise of AI when used ethically and responsibly. But Anthropic violates these principles on a systematic and widespread basis.”

AI models like the popular ChatGPT are “trained” to produce new content by feeding them vast quantities of existing works known as “inputs.” Whether doing so infringes the copyrights to that underlying material is something of an existential question for the booming sector, since depriving AI models of new inputs could limit their abilities. Content owners in many sectors – including book authors, comedians and visual artists – have all filed similar lawsuits over training.

Anthropic and other AI firms believe that such training is protected by copyright’s fair use doctrine — an important rule that allows people to reuse protected works without breaking the law. In a filing at the Copyright Office last month, Anthropic previewed how it might make such an argument in UMG’s lawsuit.

“The copying is merely an intermediate step, extracting unprotectable elements about the entire corpus of works, in order to create new outputs,” the company wrote in that filing. “This sort of transformative use has been recognized as lawful in the past and should continue to be considered lawful in this case.”

But in Thursday’s motion for the injunction, UMG and the music companies sharply disputed that notion, saying plainly: “Anthropic’s infringement is not fair use.”

“Anthropic … may argue that generative AI companies can facilitate immense value to society and should be excused from complying with copyright law to foster their rapid growth,” UMG wrote. “Undisputedly, Anthropic will be a more valuable company if it can avoid paying for the content on which it admittedly relies, but that should hardly compel the Court to provide it a get-out-of-jail-free card for its wholesale theft of copyrighted content.”

A spokesperson for Anthropic did not immediately return a request for comment on Friday.

CreateSafe, a music technology studio best known for its work on Grimes’ AI voice model, has raised $4.6 million in seed funding for its new AI music creation toolkit, TRINITI.

TRINITI offers a “full creative stack” for musicians, covering everything from the inception of a song to its release. The seed round was led by Polychain Capital, a cryptocurrency and blockchain tech investment firm, with participation from Crush Ventures, Anthony Saleh (manager of Kendrick Lamar, Nas and Gunna), Paris Hilton’s 11:11 Media, MoonPay, Chaac Ventures, Unified Music Group and Dan Weisman (vp at Bernstein Private Wealth Management).

Grimes has also joined CreateSafe’s advisory board, continuing her collaboration with the company.

Starting today, TRINITI will offer five tools:

Voice transformation and cloning: create your own voice model and offer it for licensing, or transform your voice into someone else’s

Sample generation: create audio samples from text-based prompts

Chat: ask questions of a chatbot trained on music industry knowledge

Distribution: share music on streaming services

Management: manage rights to songs and records

“Music is the core of humankind,” said CreateSafe founder/CEO Daouda Leonard. “However, the story of music as a profession has been corrupted by middle men, who have misguided the industry while taking money from artists. For a few years, we’ve been saying that we are building the operating system for the new music business. With AI, it’s possible to fulfill that promise. We want to pioneer the age of exponential creativity and give power back to creators. With TRINITI, you can turn inspiration into a song and set of visuals. That music gets distributed to DSPs, a marketing plan can be generated, and all of the business on the backend can be easily managed. This whole process takes seconds.”

“As a team we’d always discussed finding novel ways of wealth redistribution via art,” added Grimes. “We immediately hopped onto blockchain tech because of the new possibilities for distribution, cutting out middle men, etc. Throwing generative music into the picture and removing all our label strings so we can reward derivative music — combined with everything we’d been working towards the last few years with blockchain — allowed a unique approach to distribution.

“I’m really proud of the team that they were able to execute this so fast and with such vision,” Grimes continued. “There’s a lot to talk about but ultimately, art generates so much money as an industry and artists see so little of it. A lot of people talk about abundance as one of the main end goals of tech, acceleration, AI, etc… for us the first step is actually figuring out how to remove friction from the process of getting resources into artists’ hands.”

During a call with financial analysts on Thursday (Nov. 16), Robert Kyncl, CEO of Warner Music Group, praised YouTube’s AI-powered voice generation experiment, which launched this week with the participation of several Warner acts, including Charlie Puth and Charli XCX.

Kyncl proposed a thought experiment: “Imagine in the early 2000s, if the file-sharing companies came to the music industry, and said, ‘would you like to experiment with this new tool that we built and see how it impacts the industry and how we can work together?’ It would have been incredible.” 

While it’s hard to imagine the tech-averse music industry of the early 2000s would’ve jumped at this opportunity, Kyncl described YouTube’s effort as “the first time that a large platform at a massive scale that has new tools at its disposal is proactively reaching out to its [music] partners to test and learn.” “I just want to underscore the significance of this kind of engagement,” he added. (Kyncl previously worked as chief business officer at YouTube.)

For the benefit of analysts, Kyncl also outlined the company’s three-pronged approach to managing the rapid emergence of AI-powered technologies. First, he said it was important to pay attention to “generative AI engines,” ensuring that they are “licensing content for training” models, “keeping records of inputs so that provenance can be tracked,” and using a “watermarking” system so that outputs can be tracked.

The next area of focus for Warner: The platforms — Spotify, TikTok, YouTube, Instagram, and more — where, as Kyncl put it, “most of the content… will end up because people who are creating want views or streams.” To manage the proliferation of AI-generated music on these services, Kyncl hoped to build on the blueprint the music industry has developed around monitoring and monetizing user-generated content, especially on YouTube, and “write the fine print for the AI age.”

Last but certainly not least, Kyncl said he was meeting with both politicians and regulators “to make sure that regulation around AI respects the creative industries.” He suggested two key goals in this arena: That “licensing for training [AI models] is required,” and that “name, image, likeness, and voice is afforded the same protection as copyright.”

YouTube is launching an experimental feature Thursday (Nov. 16) that will create artificial intelligence-generated voices of well-known artists for use in clips on YouTube Shorts. The initial selection of acts participating in the program includes Charlie Puth, John Legend, Sia, T-Pain, Demi Lovato, Troye Sivan, Charli XCX, Alec Benjamin and Papoose.

YouTube’s feature, called Dream Track, creates pieces of music up to 30 seconds in length — voice along with musical accompaniment — based on text prompts. For now, around 100 U.S.-based creators will have Dream Track access.

“At this initial phase, the experiment is designed to help explore how the technology could be used to create deeper connections between artists and creators, and ultimately, their fans,” according to a blog post from Lyor Cohen, global head of music, and Toni Reid, vp of emerging experiences and community.

The music industry has been wary of AI this year, but several prominent executives voiced their support for Dream Track. “In this dynamic and rapidly evolving market, artists gain most when together we engage with our technology partners to work towards an environment in which responsible AI can take root and grow,” Universal Music Group chairman and CEO Lucian Grainge said in a statement. “Only with active, constructive and deep engagement can we build a mutually successful future together.”

“YouTube is taking a collaborative approach with this Beta,” Robert Kyncl, CEO of Warner Music Group, said in a statement of his own. “These artists are being offered the choice to lean in, and we’re pleased to experiment and find out what the creators come up with.” 

YouTube emphasized that Dream Track is an experiment. The artists involved are “excited to help us shape the future,” Cohen said in an interview. “Being part of this experiment allows them to do it.” That also means that, for now, some of the underlying details — how is the AI tech trained? how might this feature be monetized at scale? — remain fuzzy.

While the lawyers figure all that out, the artists involved in Dream Track sounded enthusiastic. Demi Lovato: “I am open minded and hopeful that this experiment with Google and YouTube will be a positive and enlightening experience.” John Legend: “I am happy to have a seat at the table, and I look forward to seeing what the creators dream up during this period.” Sia: “I can’t wait to hear what kinds of recipes all you creators out there come up with.” 

While YouTube’s AI-generated voices are likely to get the most attention, the platform also announced the release of new AI music tools. These build on lessons learned from the “AI Music Incubator” the platform announced in August, according to Demis Hassabis, CEO of Google DeepMind. Through that program, “some of the world’s most famous musicians have given feedback on what they would like to see, and we’ve been inspired by that to build out the technology and the tools in certain ways so that it would be useful for them,” Hassabis explained in an interview.

He ticked off a handful of examples: An artist can hum something and AI-powered technology will create an instrumental based on the tune; a songwriter can pen two musical phrases on their own and rely on the tools to help craft a transition between them; a singer can come in with a fully fledged vocal melody and ask the tech to come up with musical accompaniment.   

Finally, YouTube is rolling out another feature called SynthID, which will watermark any of the AI-generated audio it produces so it can be identified as such. Earlier this week, the platform announced that it would provide labels and other music rights holders the ability “to request the removal of AI-generated music content that mimics an artist’s unique singing or rapping voice.”

Moises, an AI music and audio start-up, has partnered with HYPERREAL, a visual effects company, to create a “proprietary digital human asset” called Hypermodel. This will allow artists to create digital versions of themselves for marketing, creative and fan engagement purposes.

HYPERREAL has been collaborating with musicians since 2021, when it worked with Paul McCartney and Beck on their music video for “Find My Way.” In the video, Beck went undercover as a younger version of the 81-year-old McCartney, with HYPERREAL’s technology swapping and de-aging their faces.

Moises is a popular AI music and audio company that provides a suite of tools for musicians, including stem separation, lyric transcription, and voice synthesis.

According to the press release, Moises and HYPERREAL believe this collaboration will especially help the estates of legacy artists bring an artist’s legacy “to life,” and will allow artists to sing or speak in another language using AI voice modeling provided by Moises, helping to localize songs and marketing content to specific regions.

Translations and estate or legacy-artist marketing are seen as two of the most sought-after new applications of AI for musicians. Last week, pop artist Lauv collaborated with AI voice start-up Hooky to translate his song “Love U Like That” into Korean as a thank-you to his steadfast fanbase in the region. This is not the first time AI has been used to translate an artist’s voice — it was first employed in May by MIDNATT, a Korean artist who used the HYBE-owned voice synthesis company Supertone to translate his debut single into six languages — but Lauv is the first popular Western artist to try it.

Estates are starting to leverage AI as well, essentially to bring late artists back to life. On Tuesday (Nov. 14), Warner Music announced plans to use AI to recreate the voice and image of legendary “La Vie En Rose” singer Edith Piaf for an upcoming biopic about her life and career. Over in Korea, Supertone remade the voice of late South Korean folk artist Kim Kwang-seok, and Tencent’s Lingyin Engine made headlines for developing “synthetic voices in memory of legendary artists,” like Teresa Teng and Anita Mui, as a way to revive interest in their catalogs.

“Moises and HYPERREAL are each best-in-class players with a history of pushing creative boundaries enabled by technology while fully respecting the choices of artists and rights holders,” says Moises CEO Geraldo Ramos. “As their preferred partner, we’re looking forward to seeing the ways HYPERREAL can leverage Moises’s voice modeling capabilities to add incredibly realistic voices to their productions.”

“We have set the industry standard and exceeded the expectations of the most demanding directors and producers time and time again,” says Remington Scott, founder and CEO of HYPERREAL. “In addition to Moises’s artist-first approach, the quality of their voice models is the best we’ve heard.”

YouTube will introduce the ability for labels and other music rights holders “to request the removal of AI-generated music content that mimics an artist’s unique singing or rapping voice,” according to a blog post published on Tuesday (Nov. 14).

Access to the request system will initially be limited: “These removal requests will be available to labels or distributors who represent artists participating in YouTube’s early AI music experiments.” However, the blog, written by vice presidents of product management Jennifer Flannery O’Connor and Emily Moxley, noted that YouTube will “continue to expand access to additional labels and distributors over the coming months.”

This marks the latest step by YouTube to try to assuage music industry fears about new AI-powered technologies — and also position itself as a leader in the space. 

In August, YouTube published its “principles for partnering with the music industry on AI technology.” Chief among them: “it must include appropriate protections and unlock opportunities for music partners who decide to participate,” wrote CEO Neal Mohan.

YouTube also partnered with a slew of artists from Universal Music Group on an “AI music incubator.” “Artists must play a central role in helping to shape the future of this technology,” the Colombian star Juanes said in a statement at the time. “I’m looking forward to working with Google and YouTube… to assure that AI develops responsibly as a tool to empower artists.”

In September, at the annual Made on YouTube event, the company announced a new suite of AI-powered video and audio tools for creators. Creators can type in an idea for a backdrop, for example, and a new feature dubbed “Dream Screen” will generate it for them. Similarly, AI can assist creators in finding the right songs for their videos.

In addition to giving labels the ability to request the takedown of unauthorized imitations, YouTube promised on Tuesday to roll out enhanced labels so that viewers know they are interacting with content that “is synthetic”: “We’ll require creators to disclose when they’ve created altered or synthetic content that is realistic, including using AI tools.” 

TikTok announced a similar feature in September. Of course, self-disclosure has its limits, especially as it has already been reported that many creators experiment with AI without admitting it.

According to YouTube, “creators who consistently choose not to disclose this information may be subject to content removal, suspension from the YouTube Partner Program, or other penalties.”

Warner Music has announced plans to use AI technology to recreate the voice and image of legendary French artist Edith Piaf in an upcoming full-length animated film. Titled EDITH, the project is being developed by production company Seriously Happy and Warner Music Entertainment in partnership with Piaf’s estate.

EDITH is set to be a 90-minute film chronicling the life and career of the famous singer as she traveled between Paris and New York. A voice clone of Piaf will narrate the story, revealing never-before-known details about her life.

The AI models used to aid EDITH’s storytelling were trained on hundreds of voice clips and images of the late French singer-songwriter to, as a press release puts it, “further enhance the authenticity and emotional impact of her story.” The story will also feature recordings of her songs “La Vie En Rose” and “Non, Je Ne Regrette Rien,” which are part of the Warner Music catalog.

The story will be told through a mix of animation and archival footage of the singer’s life, including clips of her stage and TV performances, interviews and personal archives. EDITH is the brainchild of Julie Veille, who previously created other French-language music biographies such as Stevie Wonder: Visionnaire et prophète; Diana Ross, suprême diva; and Sting, l’électron libre. The screenplay was written by Veille and Gilles Marliac and will be developed alongside Warner Music Entertainment president Charlie Cohen. The proof of concept has been created, and the team will soon partner with a studio to develop it into a full-length film.

This is not the first time AI voice clones have been used to aid in the storytelling of a film. Perhaps the most cited example is Roadrunner (2021), a documentary about the life of chef and TV host Anthony Bourdain, who died in 2018. AI was used to bring back Bourdain’s voice for about 45 seconds, during which a deepfaked Bourdain read aloud to the audience a letter he had written during his life.

Visual AI and other forms of CGI have also been employed in movies in recent years to resurrect the likenesses of deceased icons, including Carrie Fisher, Harold Ramis and Paul Walker. Even James Dean, who died in 1955 after starring in only three films, is currently being recreated using AI for an upcoming film titled Back to Eden.

The EDITH project is likely just the start of estates using AI voice or likeness recreation to rejuvenate the relevance of deceased artists and grow the value of older music catalogs. Already, HYBE-owned AI voice synthesis company Supertone remade the voice of late South Korean folk artist Kim Kwang-seok, and Tencent’s Lingyin Engine made headlines for developing “synthetic voices in memory of legendary artists,” like Teresa Teng and Anita Mui.

Veille says, “It has been the greatest privilege to work alongside Edith’s Estate to help bring her story into the 21st century. When creating the film we kept asking ourselves, ‘if Edith were still with us, what messages would she want to convey to the younger generations?’ Her story is one of incredible resilience, of overcoming struggles, and defying social norms to achieve greatness – and one that is as relevant now as it was then. Our goal is to utilize the latest advancements in animation and technology to bring the timeless story to audiences of all ages.”

Catherine Glavas and Christie Laume, executors of Edith Piaf’s estate, add, “It’s been a special and touching experience to be able to hear Edith’s voice once again – the technology has made it feel like we were back in the room with her. The animation is beautiful and through this film we’ll be able to show the real side of Edith – her joyful personality, her humor and her unwavering spirit.”

Alain Veille, CEO of Warner Music France, says, “Edith is one of France’s greatest ever artists and she is still a source of so much pride to the French people. It is such a delicate balancing act when combining new technology with heritage artists, and it was imperative to us that we worked closely with Edith’s estate and handled this project with the utmost respect. Her story is one that deserves to be told, and through this film we’ll be able to connect with a whole new audience and inspire a new generation of fans.”

Diaa El All, CEO/founder of generative artificial intelligence music company Soundful, remembers when the first artists were signed to major label deals based on songs using type beats — cheap, licensable beats available online that are labeled after the artists they emulate (e.g., Drake Type Beat, XXXTentacion Type Beat). He also remembers the legal troubles that followed. “Those type beats are licensed to sometimes thousands of people at a time,” he explains. “If it becomes a hit for one artist, then that artist ends up with major problems to unravel.”

Perhaps the most famous example of this is Lil Nas X and his breakthrough smash “Old Town Road,” which was written over a $30 Future type beat that was also licensed by other DIY talents. After the song went viral in early 2019, the then-unknown rapper and meme maker quickly inked a deal with Columbia Records, but beneath the song’s mammoth success lay a tangle of legal issues to sort through. For one thing, the song’s type beat included an unauthorized sample of Nine Inch Nails’ “34 Ghosts IV,” which was not disclosed to Lil Nas X when he purchased it.

El All’s solution to these issues may seem counter-intuitive, but he posits that his AI models could provide an ethical alternative to the copyright nightmares of the type beat market.

Starting Wednesday (Nov. 8), Soundful is launching Soundful Collabs, a program that partners with artists, songwriters and producers in various genres — including Kaskade, Starrah, 3LAU, DJ White Shadow, Autograf and CB Mix — to train personalized AI generators that create beats akin to their specific production and writing styles. To create a realistic model, the artists, songwriters and producers provide Soundful with dozens of their favorite one-shot recordings of kick drums, snares, guitar licks and synth patches from their personal sonic libraries, as well as information about how they typically construct chord progressions and song structures.

The result is individualized AI models that can generate endless one-of-a-kind tracks that echo a hitmaker’s style while compensating them for the use of their name and sonic identity. For $15, a Soundful subscriber can download up to 10 tracks the generator comes up with. This includes stems so the user can add or subtract elements of the track to suit their tastes after exporting it to a digital audio workstation (DAW) of their choice. The hitmaker receives 80% of the monies earned from the collaboration while Soundful retains 20% — a split El All says was inspired by “flipping” major record labels’ common 80/20 split in favor of the artist.

The Soundful leader, who has a background as a classical pianist and sound engineer, sees this as a novel form of musical “merchandise” that offers talent an additional revenue stream and a chance at fostering further fan engagement and user-generated content (UGC). “We don’t use any loops, we don’t use any previous tracks as references,” El All says. As a result, he argues the product’s profits belong only to the talent, not their record label or publishers, given that it does not use any of their copyrights. Still, he says he’s met with “a lot of publishers” and some labels about the new product. (El All admits that an artist in a 360 deal — a contract which grants labels a cut of money from touring, merchandise and other forms of non-recorded music income — may have to share proceeds with their label.)

According to Kaskade, who has been a fan of Soundful’s since he tested the original beta product earlier this year, the process of training his model felt like “Splice on crack — this is the next evolution of the Splice sample packs,” where producers offer fans the opportunity to purchase a pack of their favorite loops and samples for a set price, he explains. “[With sample packs] you got access to the sounds, but now, you get an AI generator to help you put it all together.”

The new Soundful product is illustrative of a larger trend in AI towards personalized models. On Monday, OpenAI, the leading AI company behind ChatGPT and DALL-E, announced that it was launching “GPTs” – a new service that allows small businesses and individuals to build customized versions of ChatGPT attuned to their personal needs and interests.

This trend is also present in music AI, with many companies offering personalized models and collaborations with talent. This is especially popular on the voice synthesis side of the nascent industry. So far, start-ups like Kits AI, Voice-Swap, Hooky, CreateSafe and more are working with artists to feed recordings of their voices into AI models to create realistic clones of their voices for fans or the artists themselves to use — Grimes’ model being the most notable to date. Though much more ethically questionable, the popularity of Ghostwriter’s “Heart On My Sleeve” — which employed a voice model to emulate Drake and The Weeknd and which was not authorized by the artists — also proved the appetite for personalized music models.

Notably, Soundful’s product has the potential to be a producer and songwriter-friendly counterpart to voice models, which present possible monetary benefits (and threats) to recording artists and singers but do not pertain to the craftspeople behind the hits, who generally enjoy fewer financial opportunities than the artists they work with. As Starrah — who has written “Havana” by Camila Cabello, “Pick Up The Phone” by Young Thug and Travis Scott and “Girls Like You” by Maroon 5 — explains, Soundful Collabs are “an opportunity for songwriters and producers to expand what they are doing in so many ways.”

El All says keeping the needs of the producer and songwriter communities in mind was paramount in the creation of this product. For the first time, he reveals that longtime industry executive, producer manager and Hallwood Media founder Neil Jacobson is on Soundful’s founding team and board. El All says Jacobson’s expertise proved instrumental in steering the Soundful Collabs project in a direction that El All feels could “change the industry for the better.” “I think what Soundful provides here is similar to what I do in my own business,” says Jacobson. “I supply music to people who need it — with Soundful, a fan of one of these artists who wants to make music but doesn’t quite know how to work a digital audio workstation can get the boost they need to start creating.”

El All says the new product will extend beyond personalization for current songwriters, producers and artists. The Soundful team is also in talks with catalog owners and estates and working with a number of top brands in the culinary, consumer goods, hospitality, children’s entertainment and energy industries to train personalized models to create sonic “brand templates” and “generative catalogs” to be used in social media content. “This will help them create a very clear signature identification via sound,” says El All.

When asked if this business-to-business application takes away synch licensing opportunities from composers, El All counters that some of these companies were using royalty-free libraries prior to meeting with Soundful. “We’re actually creating new opportunities for musicians because we are consistently hiring those specializing in synch and sound designers to continue to evolve the brand’s sound,” he says.

In the future, Soundful will drop more artist templates every four to six weeks, and its Collabs will expand into genres like Latin, lo-fi, rock, pop and more. “Though this sounds good out of the box … what will make the music a hit is when a person downloads these stems and adds their own human imperfections and style to it,” says El All. “That’s what we are looking to encourage. It’s a jumping off point.”

Korean artist MIDNATT made history earlier this year by using AI to help him translate his debut single “Masquerade” into six different languages. Though it wasn’t a major commercial success, its seamless execution by the HYBE-owned voice synthesis company Supertone proved there was a new, positive application of musical AI on the horizon that went beyond unauthorized deepfakes and (often disappointing) lo-fi beats.

Enter Jordan “DJ Swivel” Young, a Grammy-winning mixing engineer and producer best known for his work with Beyonce, BTS and Dua Lipa. His new AI voice company, Hooky, is one of many start-ups trying to popularize voice cloning, but unlike much of his competition, Young is still an active and well-known collaborator for today’s musical elite. After reconnecting with pop star Lauv, whom he had worked with briefly years before as an engineer, Hooky developed an AI model of Lauv’s voice so that the pair could translate the singer-songwriter’s new single “Love U Like That” into Korean.

Lauv is the first major Western artist to take part in the AI translation trend. He wants the new translated version of “Love U Like That” to be a way of showing his love to his Korean fanbase and of celebrating his biggest headline show to date, which recently took place in Seoul.

Though many fans around the world listen to English-language music in high numbers, Will Page, author and former chief economist at Spotify, and Chris Dalla Riva, a musician and Audiomack employee, noted in a recent report that many international audiences are increasingly turning back to music in their local languages – a trend they nicknamed “Glocalization.” With Hooky, Supertone and other AI voice synthesis companies all working to master translation, English-speaking artists now have the opportunity to participate in this growing movement and form tighter bonds with international fans.

To explain the creation of “Love U Like That (Korean Version),” out Wednesday (Nov. 8), Lauv and Young spoke to Billboard in an exclusive interview.

When did you first hear what was possible with AI voice filters?

Lauv: I think the first time was that Drake and The Weeknd song [“Heart On My Sleeve” by Ghostwriter]. I thought it was crazy. Then when my friend and I were working on my album, we started playing each other’s music. He pulled out a demo. They were pitching it to Nicki Minaj, and he was singing it and then put it into Nicki Minaj’s voice. I remember thinking it’s so insane this is possible.

Why did you want to get involved with AI voice technology yourself?

Lauv: I truly believe that the only way forward is to embrace what is possible now, no matter what. I think being able to embrace a tool like this in a way that’s beneficial and able to get artists paid is great. 

Jordan, how did you get acquainted with Lauv, and why did you feel he was the right artist to mark your first major collaboration? 

Jordan “DJ Swivel” Young: We’ve done a lot of general outreach to record companies, managers, etcetera. We met Range Media Partners, Lauv’s management team, and they really resonated with Hooky. The timing was perfect: he was wrapping up his Asian tour and had done the biggest show of his life in South Korea. Plus, he has done a few collaborations with BTS. I’ve worked on a number of BTS songs too. There was a lot of synergy between us.

Why did you choose Korean as the language that you wanted to translate a song into?

Lauv: Well, in the future, I would love to have the opportunity to do this in as many different languages as possible, but Seoul has been a place that has become really close to my heart, and it was the place of my biggest headline show to date. I just wanted to start by doing something special for those Korean fans. 

What is the process of actually translating the song? 

Young: We received the original audio files for the song “Love U Like That,” and we rewrote the song with former K-Pop idol Kevin Woo. The thing with translating lyrics or poetry is it can’t be a direct translation. You have to make culturally appropriate choices, words that flow well. So Kevin did that and we re-recorded Kevin’s voice singing the translation, then we mixed the song again exactly as the original was done to match it sonically. All the background vocals were at the correct volume and the right reverbs were used. I think we’ve done a good job of matching it. Then we used our AI voice technology to match Lauv’s voice, and we converted Kevin’s Korean version into Lauv’s voice. 

Lauv: To help them make the model of my voice, I sent over a bunch of raw vocals that were just me singing in different registers. Then I met up with him and Kevin. It was riveting to hear my voice like that. I gave a couple of notes – very minor things – after hearing the initial version of the translation, and then they went back and modified. I really trusted Jordan and Kevin on how to make this authentic and respectful to Korean culture.

Is there an art to translating lyrics?

Lauv: Totally. When I was listening back to it, that’s what struck me. There’s certain parts that are so pleasing to the ear. I still love hearing the Korean version phonetically as someone from the outside. Certain parts of Kevin’s translation, like certain rhythm schemes, hit me so much harder than hearing it in English actually.

Do you foresee that there will be more opportunities for translators as this space develops?

Young: Absolutely. I call them songwriters more than translators though, actually. They play a huge role. I used to work with Beyonce as an engineer, and I’ve watched her do a couple songs in Spanish. It required a whole new vocal producer, a new team just to pull off those songs. It’s daunting to sing something that’s not your natural language. I even did some Korean background vocals myself on a BTS song I wrote. They provided me with the phonetics, and I can say it was honestly the hardest thing I’ve ever recorded. It’s hard to sing with the right emotion when you’re focused on pronouncing things correctly. But Hooky allows the artist to perform in other languages but with all the emotion that’s expected. Sure, there’s another songwriter doing the Korean performance, but Lauv was there for the whole process. His fingerprint is on it from beginning to end. I think this is the future of how music will be consumed. 

I think this could bring more opportunities for the mixing engineers too. When Dolby Atmos came out that offered more chances for mixers, and with the translations, I think there are now even more opportunities. I think it’s empowering the songwriter, the engineer, and the artist all at once. There could even be a new opportunity created for a demo singer, if it’s different from the songwriter who translated the song. 

Would you be open to making your voice model that you used for this song available to the public to use?

Lauv: Without thinking it through too much, I think my ideal self is a very open person, and so I feel like I want to say hell yeah. If people have song ideas and want to hear my voice singing their ideas, why not? As long as it’s clear to the world which songs were written and made by me and what was written by someone else using my voice tone. As long as the backend stuff makes sense, I don’t see any reason why not.