Source: Scott Dudelson / Getty
In a recent interview, The D.O.C. talked about how it feels to be creating new music after so many years, this time with the help of AI technology.

Iconic rapper the D.O.C. sat down with CBS Mornings on Nov. 10 for an interview about his career and the life-changing car accident that damaged his vocal cords, and he also shared how he’s using artificial intelligence to help him make new music. “Fab 5 Freddy asked me would I be interested in creating an album using AI technology,” he said to co-host Michelle Miller.

The former YO! MTV Raps host and Hip-Hop cultural pioneer felt that the idea of the D.O.C. using that technology was natural. “I just felt like it was a no-brainer for somebody like him to have an opportunity to bring new music to the world,” he said during the segment. Fab 5 Freddy would then connect the rap veteran to Mikey Shulman, the CEO of AI company Suno.
Shulman said that the company is “teaching the machine what D.O.C. used to sound like.” In studio sessions in the D.O.C.’s hometown of Dallas, Texas, which were captured for the interview, the rapper noted how effective the AI technology was at recreating, from his older recordings, the voice he had before his car accident. “When this thing happens it sounds like a real me,” he said to the Suno personnel.
Miller asked Shulman towards the end of the segment, “Aren’t there ethical issues here?” voicing the concerns that have been raised about the technology being used for dubious purposes. Shulman acknowledged those concerns, but felt that in this case, the choice was a “slam dunk.”
“Letting D.O.C. recreate the voice that has been in his head that he hasn’t been able to get out there for the last 35 years – I can’t think of a better usage of this technology than that,” he said.
The D.O.C. echoed that sentiment, seeing it as a blessing, which he expressed in a post on X, formerly Twitter, on the day the CBS Mornings interview aired. “I gotta share this just to show you how cold G.O.D is,” he began. “In light of the news today that I’m getting an opportunity to make a new album using the voice I was born with, around 3:33am will be 34 years to the day I lost it on 11/11/89. Circle of light.”

YouTube will introduce the ability for labels and other music rights holders “to request the removal of AI-generated music content that mimics an artist’s unique singing or rapping voice,” according to a blog post published on Tuesday (Nov. 14).

Access to the request system will initially be limited: “These removal requests will be available to labels or distributors who represent artists participating in YouTube’s early AI music experiments.” However, the post, written by vice presidents of product management Jennifer Flannery O’Connor and Emily Moxley, noted that YouTube will “continue to expand access to additional labels and distributors over the coming months.”

This marks the latest step by YouTube to try to assuage music industry fears about new AI-powered technologies — and also position itself as a leader in the space. 

In August, YouTube published its “principles for partnering with the music industry on AI technology.” Chief among them: “it must include appropriate protections and unlock opportunities for music partners who decide to participate,” wrote CEO Neal Mohan.

YouTube also partnered with a slew of artists from Universal Music Group on an “AI music incubator.” “Artists must play a central role in helping to shape the future of this technology,” the Colombian star Juanes said in a statement at the time. “I’m looking forward to working with Google and YouTube… to assure that AI develops responsibly as a tool to empower artists.”

In September, at the annual Made on YouTube event, the company announced a new suite of AI-powered video and audio tools for creators. Creators can type in an idea for a backdrop, for example, and a new feature dubbed “Dream Screen” will generate it for them. Similarly, AI can assist creators in finding the right songs for their videos.

In addition to giving labels the ability to request the takedown of unauthorized imitations, YouTube promised on Tuesday to roll out enhanced labels so that viewers know they are interacting with content that “is synthetic”: “We’ll require creators to disclose when they’ve created altered or synthetic content that is realistic, including using AI tools.” 

TikTok announced a similar feature in September. Of course, self-disclosure has its limits, especially since it has already been reported that many creators experiment with AI without admitting it.

According to YouTube, “creators who consistently choose not to disclose this information may be subject to content removal, suspension from the YouTube Partner Program, or other penalties.”

Warner Music has announced plans to use AI technology to recreate the voice and image of legendary French artist Edith Piaf in an upcoming full-length animated film. Titled EDITH, the project is being developed by production company Seriously Happy and Warner Music Entertainment in partnership with Piaf’s estate.
EDITH is set to be a 90-minute film chronicling the life and career of the famous singer as she traveled between Paris and New York. The voice clone of Piaf will narrate the story, revealing previously unknown details about her life.

The AI models used to aid EDITH’s storytelling were trained on hundreds of voice clips and images of the late French singer-songwriter to, as a press release puts it, “further enhance the authenticity and emotional impact of her story.” The story will also feature recordings of her songs “La Vie En Rose” and “Non, Je Ne Regrette Rien,” which are part of the Warner Music catalog.

The story will be told through a mix of animation and archival footage of the singer’s life, including clips of her stage and TV performances, interviews and personal archives. EDITH is the brainchild of Julie Veille, who previously created other French-language music biographies such as Stevie Wonder: Visionnaire et prophète; Diana Ross, suprême diva; and Sting, l’électron libre. The screenplay was written by Veille and Gilles Marliac and will be developed alongside Warner Music Entertainment president Charlie Cohen. The proof of concept has been created, and the team will soon partner with a studio to develop it into a full-length film.

This is not the first time AI voice clones have been used to aid in the storytelling of a film. Perhaps the most cited example is Roadrunner (2021), a documentary about the life of chef and TV host Anthony Bourdain, who passed away in 2018. AI was used to bring back Bourdain’s voice for about 45 seconds, during which a deepfaked Bourdain read aloud to the audience a letter he had written during his life.

Visual AI and other forms of CGI have also been employed in movies in recent years to resurrect the likenesses of deceased icons, including Carrie Fisher, Harold Ramis and Paul Walker. Even James Dean, who died in 1955 after starring in only three films, is currently being recreated using AI for an upcoming film titled Back to Eden.

The EDITH project is likely just the start of estates using AI voice or likeness recreation to rejuvenate the relevance of deceased artists and grow the value of older music catalogs. Already, HYBE-owned AI voice synthesis company Supertone remade the voice of late South Korean folk artist Kim Kwang-seok, and Tencent’s Lingyin Engine made headlines for developing “synthetic voices in memory of legendary artists,” like Teresa Teng and Anita Mui.

Veille says, “It has been the greatest privilege to work alongside Edith’s Estate to help bring her story into the 21st century. When creating the film we kept asking ourselves, ‘if Edith were still with us, what messages would she want to convey to the younger generations?’ Her story is one of incredible resilience, of overcoming struggles, and defying social norms to achieve greatness – and one that is as relevant now as it was then. Our goal is to utilize the latest advancements in animation and technology to bring the timeless story to audiences of all ages.”

Catherine Glavas and Christie Laume, executors of Edith Piaf’s estate, add, “It’s been a special and touching experience to be able to hear Edith’s voice once again – the technology has made it feel like we were back in the room with her. The animation is beautiful and through this film we’ll be able to show the real side of Edith – her joyful personality, her humor and her unwavering spirit.”

Alain Veille, CEO of Warner Music France, says, “Edith is one of France’s greatest ever artists and she is still a source of so much pride to the French people. It is such a delicate balancing act when combining new technology with heritage artists, and it was imperative to us that we worked closely with Edith’s estate and handled this project with the utmost respect. Her story is one that deserves to be told, and through this film we’ll be able to connect with a whole new audience and inspire a new generation of fans.”

Nepal’s government decided on Monday to ban the popular social media app TikTok, saying it was disrupting “social harmony” in the country, home of Mount Everest. The announcement was made in the capital of Kathmandu following a Cabinet meeting. Foreign Minister Narayan Prakash Saud said the app would be banned immediately. “The government has decided […]

Source: NurPhoto / Getty / Sony / PS5 / PlayStation
Sony’s PS5 is on its way to being one of the company’s most successful consoles.
It was a successful quarter for Sony when it comes to PS5 sales. As spotted by Engadget, the company sold 4.9 million PS5 units during its second financial quarter.

That brings the total number of PS5 consoles sold to 46.6 million. While impressive, the figure is still short of last year’s holiday numbers, though Sony did manage to move 1.6 million more units than it did in the previous year.

The impressive numbers come after Sony (along with Nintendo and Microsoft) struggled to keep up with demand for its consoles because of global supply chain issues brought on by the COVID-19 pandemic.
As things slowly got back to “normal” and the shortages began to ease up, Sony was able to ramp up production on the PS5 console.
By the end of July 2023, Sony announced that more than 40 million PS5 units had been sold since the console hit shelves in November 2020.
Sony has set an ambitious target of shipping 25 million PS5 units in this financial year, but to achieve that goal, the company has to move 16.8 million more units.
While it seems like a bit of a stretch, according to Reuters reporting, Sony President Hiroki Totoki believes the goal is something his company “can attain very easily.”
The new “PS5 Slim” model due out this month could be the shot in the arm Sony needs this holiday season to reach that goal, as the PS5 will surely be at the top of children’s and adults’ Christmas wishlists.
Sony Also Moved A Lot of Games
Hardware wasn’t the only thing Sony talked about. The company sold 67.6 million games in the second quarter; 4.7 million of those were first-party titles.
The first-party numbers should also get a boost thanks to Insomniac Games’ Marvel’s Spider-Man 2, which the studio happily announced has sold 5 million copies since launching exclusively on the PS5 console.

That’s quite a feat for a game that, unlike Grand Theft Auto or Call of Duty, doesn’t launch on multiple consoles.

Photo: NurPhoto / Getty

Source: Nintendo / The Legend of Zelda
Move over Mario. Link is the latest Nintendo character coming to a movie theater near you.
Per Engadget, Nintendo has confirmed that The Legend of Zelda is being adapted into a live-action film. The film adaptation of The Legend of Zelda has been in discussion for a while, but now we have confirmation that the project is in development, and Wes Ball (The Maze Runner, Kingdom of The Planet of The Apes) will sit in the director’s chair.
It would seem all Nintendo needed to give this project the greenlight was the success of the Super Mario Bros. movie.
Nintendo’s Shigeru Miyamoto and Avi Arad will serve as producers on the film; shockingly, Sony Pictures Entertainment is co-financing the movie.
Nintendo will finance 50% of the film, while Sony will handle the theater distribution.
As he did for the Super Mario Bros. animated film, Miyamoto-san proudly announced the film via Nintendo’s official X, formerly Twitter, account.
“This is Miyamoto. I have been working on the live-action film of The Legend of Zelda for many years now with Avi Arad-san, who has produced many mega hit films,” the Nintendo chief wrote.
Miyamoto also noted that the film is in early development, with “Nintendo itself heavily involved in the production,” and that “it will take time until its completion.”

X Reacts To The Legend of Zelda Movie
With the film’s announcement, X’s video game community is reacting and already fan-casting for the movie, like choosing Euphoria’s Hunter Schafer for the iconic role of Princess Zelda.

Some folks are not too excited. Kinda Funny’s Barrett Courtney, a lifelong Zelda fan, isn’t as optimistic about the upcoming film.

The Legend of Zelda film is still a long way off, but we will remain optimistic about it. You can see more reactions to the upcoming film in the gallery below.

Photo: Nintendo / The Legend of Zelda

Source: Dia Dipasupil / Getty / Snoop Dogg / Cordell Broadus
Snoop Dogg is looking to do more than give out Death Row chains since acquiring the iconic music label. He is teaming up with his son, Cordell Broadus, to help put minority creators and gamers on the map.

Spotted on HypeBeast, the iconic Hip-Hop star and his seed are working together to give minority creators and artists a platform to develop and publish games on Fortnite using Epic Games’ Unreal Editor for Fortnite.
Broadus made the dope announcement while speaking at AFROTECH.
Per HypeBeast:

“We’ve been creating games, and none of them has been published anything on a huge scale, but on a very amateur level,” Broadus told AFROTECH. “We’ve been around games for the last five to six years. And Snoop, he’s done mobile games. A few of them in the past five to six years in apps and stuff like that. So we’ve always had the mindset of building it on our own.”

He added, “We felt like let’s really put resources into building Death Row Games and making a home for diverse creators in the gaming ecosystem and be a part of the narrative, the storytelling of what the next game should be looking like. And I keep saying ‘show representation of the culture in these sectors,’ versus us just being the talent. We wanted to make sure that we’re part of the decision that’s being made and more importantly tell these stories from diverse creators and focus on creatives in underserved communities.”

Cordell Broadus Is Excited To Work With His Dad

“I think it’s dope for a father and a son to be working together for one, and for two he’s 51 years old. So he may be out of touch with some of these things.” He continued, “And I’m always there to keep him up to speed and translate information to him in a tone that he can understand. I think that’s more important… but just as much as I’m giving him game he’s given me the keys to make these decisions with a IP as big as his.”

Salute to both of them. We hope Death Row Gaming accomplishes its mission by opening the doors for Black and Brown creators in the video game space.

Photo: Dia Dipasupil / Getty

Source: SOPA Images / Getty / GTA 6
GTA 6 trends on X, formerly known as Twitter, almost every month, and usually, it’s for no good reason. Well, that finally changed last night, and gamers have a reason to be very excited.
It’s happening, it’s finally happening. After years of silence and one massive leak, Rockstar Games is ready to unveil Grand Theft Auto VI, promising to share a trailer of the highly anticipated game.
The news was first revealed via bombshell exclusive reporting from Bloomberg’s Jason Schreier, and it was officially confirmed with a post on X from Rockstar Games, which is notoriously quiet about upcoming projects.
“We are very excited to let you know that in early December, we will release the first trailer for the next Grand Theft Auto. We look forward to many more years of sharing these experiences with all of you,” said Rockstar Games president Sam Houser.

Leading up to this long-awaited day, word on the video game streets was that GTA 6 would feature the series’ first female protagonist and would be influenced by Bonnie and Clyde, while possibly returning to Vice City, Grand Theft Auto’s fictional spin on Miami.
It’s been no secret that Rockstar Games has been working on GTA 6 while continuing to add content to Grand Theft Auto Online, which launched alongside Grand Theft Auto V, a game that astonishingly came out in 2013 and has lived across three console generations.
GTA V is also one of the most profitable pieces of entertainment ever, bringing in more cash than any film or book released since.
So, it should come as no shock that Rockstar Games took its sweet time making GTA 6.
Gamers Can Finally Rest: GTA 6 Is No Longer A Rumor
As expected, the video game world has been beaming with excitement at the news, knowing gamers are about to get their first official look at Grand Theft Auto 6.
“No more GTA 6 rumors, we can finally rest,” one person said on X. 
“The GTA 6 reveal rumours are finally over,” Tom Henderson tweeted. 
We will be sharing that news when it officially drops. Until then, you can peep more reactions in the gallery below.

Photo: SOPA Images / Getty

Diaa El All, CEO/founder of generative artificial intelligence music company Soundful, remembers when the first artists were signed to major label deals based on songs using type beats — cheap, licensable beats available online that are labeled based on the artists the beat emulates (e.g., Drake Type Beat, XXXTentacion Type Beat). He also remembers the legal troubles that followed. “Those type beats are licensed to sometimes thousands of people at a time,” he explains. “If it becomes a hit for one artist, then that artist ends up with major problems to unravel.”

Perhaps the most famous example of this is Lil Nas X and his breakthrough smash “Old Town Road,” which was written over a $30 Future type beat that was also licensed by other DIY talents. After the song went viral in early 2019, the then-unknown rapper and meme maker quickly inked a deal with Columbia Records, but beneath the song’s mammoth success lay a tangle of legal issues to sort through. For one thing, the song’s type beat included an unauthorized sample of Nine Inch Nails’ “34 Ghosts IV,” which was not disclosed to Lil Nas X when he purchased it.

El All’s solution to these issues may seem counter-intuitive, but he posits that his AI models could provide an ethical alternative to the copyright nightmares of the type beat market.

Starting Wednesday (Nov. 8), Soundful is launching Soundful Collabs, which is partnering with artists, songwriters and producers in various genres — including Kaskade, Starrah, 3LAU, DJ White Shadow, Autograf and CB Mix — to train personalized AI generators that create beats akin to their specific production and writing styles. To create a realistic model, the artists, songwriters and producers provide Soundful with dozens of their favorite one-shot recordings of kick drums, snares, guitar licks and synth patches from their personal sonic libraries, as well as information about how they typically construct chord progressions and song structures.

The result is individualized AI models that can generate endless one-of-a-kind tracks that echo a hitmaker’s style while compensating them for the use of their name and sonic identity. For $15, a Soundful subscriber can download up to 10 tracks the generator comes up with. This includes stems so the user can add or subtract elements of the track to suit their tastes after exporting it to a digital audio workstation (DAW) of their choice. The hitmaker receives 80% of the monies earned from the collaboration while Soundful retains 20% — a split El All says was inspired by “flipping” major record labels’ common 80/20 split in favor of the artist.

The Soundful leader, who has a background as a classical pianist and sound engineer, sees this as a novel form of musical “merchandise” that offers talent an additional revenue stream and a chance at fostering further fan engagement and user-generated content (UGC). “We don’t use any loops, we don’t use any previous tracks as references,” El All says. As a result, he argues the product’s profits belong only to the talent, not their record label or publishers, given that it does not use any of their copyrights. Still, he says he’s met with “a lot of publishers” and some labels about the new product. (El All admits that an artist in a 360 deal — a contract which grants labels a cut of money from touring, merchandise and other forms of non-recorded music income — may have to share proceeds with their label.)

According to Kaskade, who has been a fan of Soundful’s since he tested the original beta product earlier this year, the process of training his model felt like “Splice on crack — this is the next evolution of the Splice sample packs,” where producers offer fans the opportunity to purchase a pack of their favorite loops and samples for a set price, he explains. “[With sample packs] you got access to the sounds, but now, you get an AI generator to help you put it all together.”

The new Soundful product is illustrative of a larger trend in AI towards personalized models. On Monday, OpenAI, the leading AI company behind ChatGPT and DALL-E, announced that it was launching “GPTs” – a new service that allows small businesses and individuals to build customized versions of ChatGPT attuned to their personal needs and interests.

This trend is also present in music AI, with many companies offering personalized models and collaborations with talent. This is especially popular on the voice synthesis side of the nascent industry. So far, start-ups like Kits AI, Voice-Swap, Hooky, CreateSafe and more are working with artists to feed recordings of their voices into AI models to create realistic clones of their voices for fans or the artists themselves to use — Grimes’ model being the most notable to date. Though much more ethically questionable, the popularity of Ghostwriter’s “Heart On My Sleeve” — which employed a voice model to emulate Drake and The Weeknd and which was not authorized by the artists — also proved the appetite for personalized music models.

Notably, Soundful’s product has the potential to be a producer and songwriter-friendly counterpart to voice models, which present possible monetary benefits (and threats) to recording artists and singers but do not pertain to the craftspeople behind the hits, who generally enjoy fewer financial opportunities than the artists they work with. As Starrah — who has written “Havana” by Camila Cabello, “Pick Up The Phone” by Young Thug and Travis Scott and “Girls Like You” by Maroon 5 — explains, Soundful Collabs are “an opportunity for songwriters and producers to expand what they are doing in so many ways.”

El All says keeping the needs of the producer and songwriter communities in mind was paramount in the creation of this product. For the first time, he reveals that longtime industry executive, producer manager and Hallwood Media founder Neil Jacobson is on Soundful’s founding team and board. El All says Jacobson’s expertise proved instrumental in steering the Soundful Collabs project in a direction that El All feels could “change the industry for the better.” “I think what Soundful provides here is similar to what I do in my own business,” says Jacobson. “I supply music to people who need it — with Soundful, a fan of one of these artists who wants to make music but doesn’t quite know how to work a digital audio workstation can get the boost they need to start creating.”

El All says the new product will extend beyond personalization for current songwriters, producers and artists. The Soundful team is also in talks with catalog owners and estates and working with a number of top brands in the culinary, consumer goods, hospitality, children’s entertainment and energy industries to train personalized models to create sonic “brand templates” and “generative catalogs” to be used in social media content. “This will help them create a very clear signature identification via sound,” says El All.

When asked if this business-to-business application takes away synch licensing opportunities from composers, El All counters that some of these companies were using royalty-free libraries prior to meeting with Soundful. “We’re actually creating new opportunities for musicians because we are consistently hiring those specializing in synch and sound designers to continue to evolve the brand’s sound,” he says.

In the future, Soundful will drop more artist templates every four to six weeks, and its Collabs will expand into genres like Latin, lo-fi, rock, pop and more. “Though this sounds good out of the box … what will make the music a hit is when a person downloads these stems and adds their own human imperfections and style to it,” says El All. “That’s what we are looking to encourage. It’s a jumping off point.”

Korean artist MIDNATT made history earlier this year by using AI to help him translate his debut single “Masquerade” into six different languages. Though it wasn’t a major commercial success, its seamless execution by the HYBE-owned voice synthesis company Supertone proved there was a new, positive application of musical AI on the horizon that went beyond unauthorized deepfakes and (often disappointing) lo-fi beats.

Enter Jordan “DJ Swivel” Young, a Grammy-winning mixing engineer and producer best known for his work with Beyonce, BTS and Dua Lipa. His new AI voice company Hooky is one of many start-ups trying to popularize voice cloning, but unlike much of his competition, Young is still an active and well-known collaborator for today’s musical elite. After connecting with pop star Lauv, whom he had worked with briefly years before as an engineer, Young’s Hooky developed an AI model of Lauv’s voice so that they could translate the singer-songwriter’s new single “Love U Like That” into Korean.

Lauv is the first major Western artist to take part in the AI translation trend. He wants the new translated version of “Love U Like That” to be a way of showing his love to his Korean fanbase and to celebrate his biggest headline show to date, which recently took place in Seoul.

Though many fans around the world listen to English-speaking music in high numbers, Will Page, author and former chief economist at Spotify, and Chris Dalla Riva, a musician and Audiomack employee, noted in a recent report that many international audiences are increasingly turning their interest back to their local language music – a trend they nicknamed “Glocalization.” With Hooky, Supertone and other AI voice synthesis companies all working to master translation, English-speaking artists now have the opportunity to participate in this growing movement and to form tighter bonds with international fans.

To explain the creation of “Love U Like That (Korean Version),” out Wednesday (Nov. 8), Lauv and Young spoke to Billboard in an exclusive interview.

When did you first hear what was possible with AI voice filters?

Lauv: I think the first time was that Drake and The Weeknd song [“Heart On My Sleeve” by Ghostwriter]. I thought it was crazy. Then when my friend and I were working on my album, we started playing each other’s music. He pulled out a demo. They were pitching it to Nicki Minaj, and he was singing it and then put it into Nicki Minaj’s voice. I remember thinking it’s so insane this is possible.

Why did you want to get involved with AI voice technology yourself?

Lauv: I truly believe that the only way forward is to embrace what is possible now, no matter what. I think being able to embrace a tool like this in a way that’s beneficial and able to get artists paid is great. 

Jordan, how did you get acquainted with Lauv, and why did you feel he was the right artist to mark your first major collaboration? 

Jordan “DJ Swivel” Young: We’ve done a lot of general outreach to record companies, managers, etcetera. We met Range Media Partners, Lauv’s management team, and they really resonated with Hooky. The timing was perfect: he was wrapping up his Asian tour and had done the biggest show of his life in South Korea. Plus, he has done a few collaborations with BTS. I’ve worked on a number of BTS songs too. There was a lot of synergy between us.

Why did you choose Korean as the language that you wanted to translate a song into?

Lauv: Well, in the future, I would love to have the opportunity to do this in as many different languages as possible, but Seoul has been a place that has become really close to my heart, and it was the place of my biggest headline show to date. I just wanted to start by doing something special for those Korean fans. 

What is the process of actually translating the song? 

Young: We received the original audio files for the song “Love U Like That,” and we rewrote the song with former K-Pop idol Kevin Woo. The thing with translating lyrics or poetry is it can’t be a direct translation. You have to make culturally appropriate choices, words that flow well. So Kevin did that and we re-recorded Kevin’s voice singing the translation, then we mixed the song again exactly as the original was done to match it sonically. All the background vocals were at the correct volume and the right reverbs were used. I think we’ve done a good job of matching it. Then we used our AI voice technology to match Lauv’s voice, and we converted Kevin’s Korean version into Lauv’s voice. 

Lauv: To help them make the model of my voice, I sent over a bunch of raw vocals that were just me singing in different registers. Then I met up with him and Kevin. It was riveting to hear my voice like that. I gave a couple of notes – very minor things – after hearing the initial version of the translation, and then they went back and modified. I really trusted Jordan and Kevin on how to make this authentic and respectful to Korean culture.

Is there an art to translating lyrics?

Lauv: Totally. When I was listening back to it, that’s what struck me. There’s certain parts that are so pleasing to the ear. I still love hearing the Korean version phonetically as someone from the outside. Certain parts of Kevin’s translation, like certain rhythm schemes, hit me so much harder than hearing it in English actually.

Do you foresee that there will be more opportunities for translators as this space develops?

Young: Absolutely. I call them songwriters more than translators though, actually. They play a huge role. I used to work with Beyonce as an engineer, and I’ve watched her do a couple songs in Spanish. It required a whole new vocal producer, a new team just to pull off those songs. It’s daunting to sing something that’s not your natural language. I even did some Korean background vocals myself on a BTS song I wrote. They provided me with the phonetics, and I can say it was honestly the hardest thing I’ve ever recorded. It’s hard to sing with the right emotion when you’re focused on pronouncing things correctly. But Hooky allows the artist to perform in other languages but with all the emotion that’s expected. Sure, there’s another songwriter doing the Korean performance, but Lauv was there for the whole process. His fingerprint is on it from beginning to end. I think this is the future of how music will be consumed. 

I think this could bring more opportunities for the mixing engineers too. When Dolby Atmos came out that offered more chances for mixers, and with the translations, I think there are now even more opportunities. I think it’s empowering the songwriter, the engineer, and the artist all at once. There could even be a new opportunity created for a demo singer, if it’s different from the songwriter who translated the song. 

Would you be open to making your voice model that you used for this song available to the public to use?

Lauv: Without thinking it through too much, I think my ideal self is a very open person, and so I feel like I want to say hell yeah. If people have song ideas and want to hear my voice singing their ideas, why not? As long as it’s clear to the world which songs were written and made by me and what was written by someone else using my voice tone. As long as the backend stuff makes sense, I don’t see any reason why not.