

Source: Anadolu Agency / Getty / Twitter
Elon Musk and Twitter are starting to look a little desperate as Threads continues to gain in popularity. Twitter is now dishing out payments to some content creators still on the platform.

Spotted on Engadget, Twitter’s “ad-revenue sharing program for creators” is a go, with some eligible Twitter Blue subscribers allegedly already getting a piece of that ad-revenue sharing pie in the form of payments.

The timing of the program’s rollout is quite convenient, but Phony Stark, aka Elon Musk, did tease the idea of the program in February while sharing few details about how it would work. Some users have been sharing notifications from the platform informing them that payments are on the way.
One user shared that he received a $24,000 deposit based on the ads in the user’s replies.

Basically, the initiative is a way to keep popular content on Twitter and, at the same time, get more users to sign up for the still very unpopular Twitter Blue subscription service.
But unfortunately, the program is only for Twitter users with at least five million post impressions in the past three months, and they must also be approved by a human moderator while adhering to Twitter’s Creator Subscriptions policies. Twitter will administer the payments through a Stripe account.
Twitter says it will soon add an application process, found under the Monetization hub in the account settings.
Twitter’s “Ad-Revenue Sharing” Program Is Already Looking Suspicious
It hasn’t been three days, and Elon Musk’s “ad-revenue sharing program for creators” is already looking really shaky. The Washington Post reports that far-right influencers on the platform, including Andrew Tate, were first on the list to receive payments.
Per The Washington Post:
The first beneficiaries appear to be high-profile far-right influencers who tweeted before the announcement how much they’ve earned as part of the program. Ian Miles Cheong, Benny Johnson, and Ashley St. Claire all touted their earnings.

“Wow. Elon Musk wasn’t kidding. Content monetization is real,” tweeted an anonymous account called End Wokeness, with 1.4 million followers, accompanied by a screenshot showing earnings of over $10,400.

So far, many of the influencers who have publicly revealed that they’re part of the program are prominent figures on the right. Andrew Tate, for example, who was recently released from jail on rape and human trafficking charges, posted that he’d been paid over $20,000 by Twitter.
Again, this sounds like a desperate ploy to keep folks tweeting. We shall see how this works out for Musk and his platform.

Photo: Anadolu Agency / Getty


Universal Music Group general counsel/executive vp of business and legal affairs, Jeffery Harleston, spoke as a witness in a Senate Judiciary Committee hearing on AI and copyright on Wednesday (July 12) to represent the music industry. In his remarks, the executive called for a “federal right of publicity” — the state-by-state right that protects artists’ likenesses, names, and voices — as well as for “visibility into AI training data” and for “AI-generated content to be labeled as such.”

Harleston was joined by other witnesses including Karla Ortiz, a conceptual artist and illustrator who is waging a class action lawsuit against Stability AI; Matthew Sag, professor of artificial intelligence at Emory University School of Law; Dana Rao, executive vp/general counsel at Adobe; and Ben Brooks, head of public policy at Stability AI.

“I’d like to make four key points to you today,” Harleston began. “First, copyright, artists, and human creativity must be protected. Art and human creativity are central to our identity.” He clarified that AI is not necessarily always an enemy to artists, and can be used in “service” to them as well. “If I leave you with one message today, it is this: AI in the service of artists and creativity can be a very, very good thing. But AI that uses, or, worse yet, appropriates the work of these artists and creators and their creative expression, their name, their image, their likeness, their voice, without authorization, without consent, simply is not a good thing,” he said.

Second, he noted the challenges that generative AI poses to copyright. In written testimony, he noted the concern of “AI-generated music being used to generate fraudulent plays on streaming services, siphoning income from human creators.” And while testifying at the hearing, he added, “At Universal, we are the stewards of tens of thousands, if not hundreds of thousands, of copyrighted creative works from our songwriters and artists, and they’ve entrusted us to honor, value and protect them. Today, they are being used to train generative AI systems without authorization. This irresponsible AI is violative of copyright law and completely unnecessary.”

Training is one of the most contentious areas of generative AI for the music industry. In order to get an AI model to learn how to generate a human voice, a drum beat or lyrics, the AI model will train itself on up to billions of data points. Often this data contains copyrighted material, like sound recordings, without the owner’s knowledge or compensation. And while many believe this should be considered a form of copyright infringement, the legality of using copyrighted works as training data is still being determined in the United States and other countries.

The topic is also the source of Ortiz’s class action lawsuit against Stability AI. Her complaint, filed in California federal court along with two other visual artists, alleges that the “new” images generated by Stability AI’s Stable Diffusion model used their art “without the consent of the artists and without compensating any of those artists,” which they feel makes any resulting generation from the AI model a “derivative work.”

In his spoken testimony, Harleston pointed to today’s “robust digital marketplace” — including social media sites, apps and more — in which “thousands of responsible companies properly obtained the rights they need to operate. There is no reason that the same rules should not apply equally to AI companies.”

Third, he reiterated that “AI can be used responsibly…just like other technologies before.” Among his examples of positive uses of AI, he pointed to Lee Hyun [aka MIDNATT], a K-pop artist distributed by UMG who used generative AI to simultaneously release the same single in six languages using his voice on the same day. “The generative AI tool extended the artist’s creative intent and expression with his consent to new markets and fans instantly,” Harleston said. “In this case, consent is the key,” he continued, echoing Ortiz’s complaint.

While making his final point, Harleston urged Congress to act in several ways — including by enacting a federal right of publicity. Currently, rights of publicity vary widely state by state, and many states’ versions include limitations, including less protection for some artists after their deaths.

The shortcomings of this state-by-state system were highlighted when an anonymous internet user called Ghostwriter posted “Heart On My Sleeve,” a song that apparently used AI to mimic the voices of Drake and The Weeknd. The track’s uncanny rendering of the two major stars immediately went viral, forcing the music business to confront the new, fast-developing concern of AI voice impersonation.

A month later, sources told Billboard that the three major label groups — UMG, Warner Music Group and Sony Music — have been in talks with the big music streaming services to allow them to cite “right of publicity” violations as a reason to take down songs with AI vocals. Removing songs based on right of publicity violations is not required by law, so the streamers’ reception to the idea appears to be voluntary.

“Deep fakes, and/or unauthorized recordings or visuals of artists generated by AI, can lead to consumer confusion, unfair competition against the artists that actually were the original creator, market dilution and damage to the artists’ reputation or potentially irreparably harming their career. An artist’s voice is often the most valuable part of their livelihood and public persona. And to steal it, no matter the means, is wrong,” said Harleston.

In his written testimony, Harleston went deeper, stating UMG’s position that “AI generated, mimicked vocals trained on vocal recordings from our copyrighted recordings go beyond Right of Publicity violations… copyright law has clearly been violated.” Many AI voice uses circulating on the internet involve users mashing up a previously released song topped with a different artist’s voice. These types of uses, Harleston wrote, mean “there are likely multiple infringements occurring.”

Harleston added that “visibility into AI training data is also needed. If the data on AI training is not transparent, the potential for a healthy marketplace will be stymied as information on infringing content will be largely inaccessible to individual creators.”

Another witness at the hearing raised the idea of an “opt-out” system so that artists who do not wish to be part of an AI’s training data set have the option of removing themselves. Already, Spawning, a music-tech start-up, has launched a website to put this possible remedy into practice for visual art. Called HaveIBeenTrained.com, the service helps creators opt out of training data sets commonly used by an array of AI companies, including Stability AI, which previously agreed to honor the site’s opt-outs.

Harleston, however, said he did not believe opt-outs are enough. “It will be hard to opt out if you don’t know what’s been opted in,” he said. Spawning co-founder Mat Dryhurst previously told Billboard that HaveIBeenTrained.com is working on an opt-in tool, though this product has yet to be released.

Finally, Harleston urged Congress to label AI-generated content. “Consumers deserve to know exactly what they’re getting,” he said.

Diego Gonzalez started making his own music in 2020, inspired in part by some of the tracks he loved from The Kid LAROI’s first album. “I was using GarageBand on my phone at the time,” he recalls. “I didn’t know what else to use.”

While killing time on TikTok, he came across posts from other artists praising BandLab, another free app that aims to make it easy for aspiring creators to create instrumental tracks and record vocals with a mobile phone. Gonzalez took to it quickly, especially the presets that add clarity and heft to a vocal. “You don’t need 1,000 buttons on there to make something sound good,” he says. With BandLab, he recorded his breakout hit, a mournful 6/8 ballad titled “You & I” that has more than 50 million Spotify streams.

For now, many of BandLab’s most successful users look outside the platform for beats. thekid.ACE, Luh Tyler and Gonzalez say they usually start by finding premade instrumentals on YouTube. “I’ll look up ‘indie-pop type beat’ or ‘R&B Daniel Caesar type beat,’ ” Gonzalez says. Then it’s a matter of seconds to download the right instrumental, open it in BandLab and “start thinking of random melodies,” explains thekid.ACE. He has made a pair of viral songs with BandLab, “Imperfect Girl” (7.3 million Spotify streams) and “Fun and Forget” (8.6 million).

Pop stars pay good money to vocal producers to adjust their pitch and stitch together the best parts of multiple takes. But BandLab lets users replicate a similar process with a few clicks, adding echo, toning down the “s” sounds and upping distortion. Built-in vocal preset options run from very specific — “Punchy Rap,” “Hype Vox” — to “let’s see what this does”: “70s Ballad,” “Sky Sound.” On top of that, “it’s insanely simple to make your own presets and adjust the reverb or the compressor,” thekid.ACE says. “Auto-Tune is super easy to do.”

SSJ Twiin, who has also enjoyed some viral success with BandLab tracks, recently started experimenting with a new panning feature that automatically throws his vocal from left to right. He’s also a fan of the harmony function that “takes your original vocal and layers it with that exact same vocal plus two semitones, another one plus four, another plus six and so on,” he says.

BandLab’s interface looks like a more cheery, streamlined version of a program like Pro Tools — each vocal or instrument track separated into a bright, clickable sound wave. “People will say BandLab is not a real [digital audio workstation],” SSJ Twiin notes. “But it’s getting to the point where there’s pretty much nothing you can’t do.”

Jacob Byrnes, director of creator relations and content strategy for the music strategy and tactics team at Universal Music Group, spends a good chunk of his day scrolling through TikTok. Last fall, he noticed a marked shift in the type of videos appearing on his For You page: “It all turned into screen captures of people playing productions they made on BandLab,” he says.

BandLab provides its 60 million-plus registered users, 40% of whom are women, with music-making software that includes an arsenal of virtual instruments, as well as the ability to automatically generate multipart vocal harmonies, record, sample and manipulate sound in myriad ways. It’s a toolbox that allows them to create professional-sounding recordings on their phones with surprising ease, transforming every civilian into a potential hit-maker. BandLab can also distribute music to streaming services, and it incorporates components of a social network: Musicians can create individual profiles, chat with one another, comment on their peers’ releases, solicit advice or break up a song into its component pieces and share those to crowdsource remixes.

The free app launched in 2016, but it has become almost inescapable over the last 12 months: 200 million videos tagged with #bandlab appeared on TikTok in April. The music industry has taken note of the ease with which users can make songs — “Labels love BandLab because it allows artists to create music for very cheap,” says one music attorney — and the velocity that some songs have picked up on streaming platforms. “There are random kids on there generating streams like crazy,” says Nima Nasseri, vp of A&R strategy at UMG. “Their monthly listeners are going from zero into the millions, and they’re doing it all from the palm of their hand.”

“It’s like other segments of the [music] internet that explode — one artist [broke] and now you’re seeing a ton of them go,” adds Jordan Weller, head of artist and investor relations at indify, a platform that helps independent acts find investors. “That’s what makes it attractive for the community. Now all of these other kids recognize that they can build careers off of BandLab — that it’s a potential pathway.”

The artists wielding BandLab are not stuck in one mode — Diego Gonzalez and d4vd enjoyed success with lovelorn ballads; Luh Tyler makes slippery, bass-heavy hip-hop; thekid.ACE favors breezy guitars; ThxSoMch traffics in shades of post-punk. Several have landed record deals — Gonzalez with Island, d4vd with Darkroom/Interscope, Tyler with Motion Music/Atlantic, ThxSoMch with Elektra and thekid.ACE with APG — while d4vd and ThxSoMch have also landed on Billboard’s charts. (All are teenagers except ThxSoMch, an elder statesman of sorts at 21.) Other acts like SSJ Twiin and kurffew have picked up more than 15 million Spotify plays apiece while remaining independent.

Even BandLab’s CEO is surprised by this wave of breakthroughs. Meng Ru Kuok says he always hoped to have an artist chart with a song made on his platform, but “the fact that it already happened last year with d4vd” — whose “Romantic Homicide” peaked at No. 33 on the Billboard Hot 100 — “was ahead of schedule.”

When Meng co-founded BandLab, he wanted to capitalize on the technological shift “from a desktop ecosystem to a mobile one”; phones represented “a musical instrument in everybody’s pocket.” He also aimed to open up audio tools to the large swath of the global population that couldn’t afford iPhones, which came with another digital audio workstation, GarageBand. BandLab makes money by taking a cut for artist services like distribution and promotion.

Artists who favor BandLab say it is remarkably frictionless to cut a vocal and smear it with effects or whip up a loop. It also has an artificial intelligence-powered SongStarter function that can automatically generate musical ideas based on a few inputs, though none of the artists who spoke for this story use it. BandLab “is easier than GarageBand; everything is in front of your face,” says keltiey, whose racing, helium-addled “Need” has over 14 million streams on Spotify.

“The more convenient you make something, the more it is going to be adapted,” says Mike Caren, founder of the publisher and independent label APG and a producer. “I used to buy full recording studios for people — Pro Tools, interfaces, [$20,000] packages of equipment.” In contrast, BandLab is free and portable. “I encourage my artists to use the platform as a way to get down spontaneous vocal ideas,” Caren says. He thinks most artists still don’t fully understand how many different tools are available within BandLab’s suite of tech; Meng says that over 40% of users work with more than two “core creation features,” but he hopes to boost that number to 99%.

When he’s not playing Fortnite with more than a dozen fellow BandLab users, thekid.ACE generally records on his bed. The same goes for Tyler, who says the ability to cut vocals in solitude was part of BandLab’s initial attraction: “I used to be nervous to rap in front of people; I just wanted to be by myself.” ThxSoMch recorded the vocals for “Spit in My Face!” in his bathroom, according to a video he posted on TikTok, while keltiey prefers to use the closet. “Her clothes would be all around,” says Velencia Wallace, keltiey’s mother and manager. “She almost had a fort.”

Young artists who get used to working quickly on BandLab in the comfort of their homes may find it hard to kick the habit, even once they have access to professional recording studios. “As the artists become more prominent, the labels want to wean them off BandLab — they want them to actually go into the studio and work with legitimate producers,” the music attorney says. “But the kids don’t want to; they want to stick to BandLab. I’ve seen situations where kids turn down big session opportunities with prominent writers and producers in favor of just doing their thing on BandLab.”

Tyler uses a studio, but says that “if I haven’t been there in a minute, I’ll just record a song on BandLab. I don’t like writing, so I’ll just do it on there and rerecord it.”

Not everyone in the music industry is sold on BandLab. One senior executive, who requested anonymity to speak frankly, was impressed with the tech. “Kids have never sounded this good at home,” he says. But so far, he continues, artists using BandLab haven’t become recognizable stars. While some of the songs stream, he notes, the acts behind them remain “faceless.” (This criticism is common in the streaming era.) In addition, the executive points out that posting BandLab sessions on TikTok has become so common that it might reach a point of oversaturation and lose steam, like previous trends before it.

Meng acknowledges there are doubters who think “this a fad.” But he’s quick to offer a rebuttal. “There are billions of people around the world who don’t have access to music-making on their mobile devices,” he says, warming to his theme. “We’re just starting to scratch the surface. There’s a lot more to come.”


Source: SOPA Images / Getty / Threads
Is it a wrap for Twitter? Instagram’s Threads swiftly surpassed the 100 million users milestone.
Spotted on The Verge via Mark Zuckerberg’s Threads profile, the platform explicitly created to rival Twitter looks like a massive success for Meta.
The Threads app surpassed 100 million users faster than OpenAI’s chatbot, ChatGPT, which accomplished the feat in two months. It took Instagram’s Threads mere days to reach that goal following its launch early Wednesday of last week.

Per The Verge:

Threads proved to be an early hit almost immediately. In the first two hours, it hit 2 million users and steadily climbed from there to 5 million, 10 million, 30 million, and then 70 million. The launch has been “way beyond our expectations,” CEO Mark Zuckerberg said on Friday.

On Monday, Zuckerberg confirmed the milestone in a Threads post, saying the growth was “mostly organic.”
Adam Mosseri, the head of Instagram, followed Zuckerberg, noting that it only took five days to reach the staggering number of users.
Now, whether that was achieved “organically” is another story. Before its launch, Threads was heavily pushed to the over 1 billion people using Instagram, allowing them to transfer their IG accounts quickly to the new platform. So we are sure that also significantly increased the number of people signing up to use Threads.
Users are also threading it up. According to The Verge, there have “been more than 95 million posts and 190 million likes shared on the app.”
Threads Is Accomplishing A Goal Adam Mosseri Claims It Doesn’t Want To Achieve
Despite these impressive numbers, Mosseri stated in a Threads post that his platform is not trying to replace Twitter and will not actively push politics or hard news. But you can’t stop users from talking about what they want to, and hard news is finding its way onto Threads.
Also, if its mission is not to replace Twitter, it seems to be failing at that mission. With some help from Elon Musk, Twitter’s traffic is reportedly “tanking,” according to CloudFlare CEO Matthew Prince.
Twitter has been telling whatever advertisers it has left, probably Cheech and Chong, whose gummy ads are flooding Twitter users’ timelines, that it has “535 million monetizable monthly active users,” according to The Wall Street Journal. 
Prince’s claims say otherwise.
Right now, it’s looking like Twitter is dying a slow death. Twitter better hope that the lawsuit bears fruit. But we are here for anything hurting Elon Musk’s pockets.

Photo: SOPA Images / Getty / Threads


All products and services featured are independently chosen by editors. However, Billboard may receive a commission on orders placed through its retail links, and the retailer may receive certain auditable data for accounting purposes.
Early Prime Day deals are giving us just a taste of what we can expect Tuesday (July 11) and Wednesday (July 12) when Prime Day officially hits Amazon. While the big shopping event will offer a slew of sales on hot-ticket items, the days leading up to Prime Day also include heavily slashed prices on tech items, home finds, fashion favorites and even TikTok beauty alternatives. One of the major discounted deals so far? An Apple MacBook Air, which is currently 25% off.


Normally, the MacBook Air is priced at $999, but the deal slashes the price by $250, leaving it at a more wallet-friendly $750. If you’ve been dreaming of owning your own MacBook, are headed to college in the fall or have an out-of-date laptop, this sale is definitely worth jumping on.

It’s not just reserved for the space gray version either; you can snag 25% off the other two shades it comes in, including gold and silver.

Keep scrolling to shop the early Prime Day deal.

Amazon

Apple 2020 MacBook Air Laptop
$749.99 $999.00 25% OFF

Apple’s 2020 MacBook Air isn’t just a sleek-looking piece of tech: it features a 13.3-inch screen with Retina display and 8GB of RAM for working with photos, documents and more. The slim, lightweight design makes it easy to transport from place to place, making it one of your travel necessities. It’s been rated 4.8 stars on Amazon, with over 17,500 shoppers giving it five stars. One person even described it as “everything you could ask for in a laptop”; it’s that good.

Not only can you surf the Web, but you’ll be able to binge watch shows on streamers including Apple TV+, Prime Video, Netflix, Hulu, Paramount+ and more. If you’re in the mood for jamming out to your latest playlist, the crisp speakers will provide a clear listening experience.

For more product recommendations, check out our roundups of the best Apple Airpod deals, TV deals and portable chargers.


Source: EA/ Cliffhanger Games / Black Panther
It’s official, de Bleck Pantha is getting his own standalone video game.
Wakanda Forever!
On the heels of the 57th anniversary of the Black Panther’s Marvel Comics debut, it was revealed on Monday, July 10, that Cliffhanger Games, a new studio, is working on a third-person video game starring the iconic superhero.
Reports of T’Challa’s standalone video game adventure first hit the internet in July 2022. Now we know the new triple-A development studio based in Seattle is tasked with delivering a Black Panther video game that will hopefully rank alongside other Marvel video games like Spider-Man, Spider-Man 2, and the criminally slept-on Guardians of the Galaxy.

Per Marvel, the studio’s mission “is to build an expansive and reactive world that empowers players to experience what it is like to take on the mantle of Wakanda’s protector, the Black Panther.”
Development on the game will be led by Kevin Stephens (Monolith Productions), with a team full of talent who worked on popular titles like Middle-earth: Shadow of Mordor, Halo Infinite, God of War, Call of Duty, and others.
“We’re dedicated to delivering fans a definitive and authentic Black Panther experience, giving them more agency and control over their narrative than they have ever experienced in a story-driven video game. Wakanda is a rich Super Hero sandbox, and our mission is to develop an epic world for players who love Black Panther and want to explore the world of Wakanda as much as we do,” said Stephens.
Cliffhanger Games & Marvel Are Working Closely Together On Black Panther
When it comes to comic book IP, fans are demanding to please because they expect the movies, original series, and games to meet the standards of the comics.
Stephens assures they will hit that mark, confirming Cliffhanger Games and Marvel have been working closely together on the game “to ensure that we craft every aspect of Wakanda, its technology, its heroes, and our own original story with the attention to detail and authenticity that the world of Black Panther deserves.”
He continues, “It’s an incredibly rare opportunity to build a new team around the values of diversity, collaboration, and empowerment. We want our game to enable players to feel what it’s like to be worthy of the Black Panther mantle in unique, story-driven ways, and we want Cliffhanger Games to empower everyone on our team as we collaborate to bring this amazing world to life.”
Reception To The News
Unsurprisingly, fans are happy to hear about Black Panther officially getting a video game. Take that, superhero fatigue. We got a taste of what it’s like to control King T’Challa in Crystal Dynamics’ disappointing Marvel’s Avengers via its well-received War for Wakanda expansion.
There is also the news of the mysterious Captain America and Black Panther video game.
If we have any concerns, it’s that we want to see Black and Brown video game development talent working on this title; it’s only fitting. When it comes to that, the pickings can seem slim, but we are not naive: that talent is out there, and a concerted effort to include it would be very much appreciated.
So far, it looks like Cliffhanger Games is on the right track. Jercye Dianingana, a 3D Senior Environment Artist II at Cliffhanger Games, was happy to reveal he was working on the Black Panther game.

We love to see it because we are rooting for everyone Black. ALWAYS!
You can see more reactions in the gallery below.

Photo: EA/ Cliffhanger Games / Black Panther


All products and services featured are independently chosen by editors. However, Billboard may receive a commission on orders placed through its retail links, and the retailer may receive certain auditable data for accounting purposes.
2K is celebrating its 25th anniversary by unveiling the latest edition of its popular NBA gaming series — and it’s officially available for preorder. The new NBA 2K24 will not only feature a slam dunk-worthy lineup of players to choose from, but also comes in multiple collectible covers, including two with the legendary Kobe Bryant on the front.

Four editions of the game are available across PS5, PS4, Xbox consoles, Nintendo Switch and PC. You can collect or choose between the Black Mamba and Kobe Bryant editions of the game, which come in past- and current-gen versions.

Pricing will differ depending on the console and version of the game you choose.

Keep reading to preorder the game and ensure you have a copy by the time it’s released.

GameStop

NBA 2K24 – Kobe Bryant Edition
From $59.99

The Kobe Bryant edition takes you back to the athlete’s early career days and lets you work your way up to stardom. Card-collecting mode has returned, offering hours of customizable options. The Kobe Bryant edition comes with just the base game, with the option to purchase either the past-gen or current-gen version.

GameStop

NBA 2K24 – Black Mamba Edition
$99.99

The Black Mamba version comes with the same base game as the Kobe Bryant edition, but adds bonuses including 100K VC, 15K MyTEAM points, a 2K24 starting five draft box with three option packs, 10 box MyTEAM promo packs, a cover star sapphire card, one diamond shoe, a 2-hour double XP coin and more.

For more product recommendations, check out our roundups of the best Legend of Zelda: Tears of the Kingdom merch, tech deals from Walmart and over-ear headphones.


Source: Anadolu Agency / Getty / Threads / Twitter
Elon Musk and his hot mess of a social media platform, Twitter, are looking salty in these digital streets after threatening to sue Meta for allegedly biting Twitter with Threads.

Spotted on The Verge, it looks like Elon Musk is shaking in his Allbirds following the successful launch of Threads, which looks to be a strong contender to knock out the bird app.

In a letter addressed to Meta CEO Mark Zuckerberg and obtained by Semafor, Twitter lawyer Alex Spiro alleges that Meta used the company’s trade secrets and intellectual property in bringing Threads to life, and he threatens legal action in the form of “both civil remedies and injunctive relief.”
Per The Verge:

Spiro, who is also Elon Musk’s personal lawyer and a partner at the Quinn Emanuel law firm, claims that Meta hired “dozens” of ex-Twitter employees to develop Threads, which wouldn’t be all that surprising given just how many people were fired following Musk’s takeover.

But according to Twitter, many of these former workers still have access to Twitter’s trade secrets and other confidential information. Twitter alleges that Meta took advantage of this and tasked these employees with developing a “copycat” app “in violation of both state and federal law.”
In response to the claims, the communications director for Meta, Andy Stone, said, “No one on the Threads engineering team is a former Twitter employee — that’s just not a thing.”
Meta also doesn’t seem fazed by Musk’s threat to sue, since that is usually the course of action his company seems to take; most recently, Twitter threatened Microsoft with a lawsuit for allegedly abusing its API.
In response to the letter, Musk said, “Competition is fine, cheating is not.”

Threads Is Winning Out The Gate

Musk and his company’s lawsuit comes on the heels of Threads’ incredible launch, which saw over 10 million users, eager to ditch Musk’s platform, sign up.
According to The Verge, Threads has over 30 million registered users, including big names like Kim Kardashian, Khloé Kardashian, J.Lo, and more already on board with the app.


Photo: Anadolu Agency / Getty


From ChatGPT writing code for software engineers to Bing’s search engine sliding in place of your bi-weekly Hinge binge, we’ve become obsessed with the capacity for artificial intelligence to replace us.

Within creative industries, this fixation manifests in generative AI. With models like DALL-E generating images from text prompts, the popularity of generative AI challenges how we understand the integrity of the creative process: When generative models are capable of materializing ideas, if not generating their own, where does that leave artists?

Google’s new text-based music generative AI, MusicLM, offers an interesting answer to this viral terminator-meets-ex-machina narrative. As a model that produces “high-fidelity music from text descriptions,” MusicLM embraces moments lost in translation that encourage creative exploration. It sets itself apart from other music generation models like Jukedeck and MuseNet by inviting users to verbalize their original ideas rather than tinker with existing music samples.

Describing how you feel is hard

AI in music is not new. But from recommending songs for Spotify’s Discover Weekly playlists to composing royalty-free music with Jukedeck, applications of AI in music have evaded the long-standing challenge of directly mapping words to music.

This is because, as a form of expression in its own right, music resonates differently with each listener. Just as different languages struggle to perfectly communicate the nuances of their respective cultures, it is difficult (if not impossible) to exhaustively capture every dimension of music in words.

MusicLM takes on this challenge by generating audio clips from descriptions like “a calming violin melody backed by a distorted guitar riff,” even accounting for less tangible inputs like “hypnotic and trance-like.” It approaches this thorny question of music categorization with a refreshing sense of self-awareness. Rather than focusing on lofty notions of style, MusicLM grounds itself in more tangible attributes of music with tags such as “snappy” or “amateurish.” It broadly considers where an audio clip may come from (e.g., “YouTube tutorial”) and the general emotional responses it may conjure (e.g., “madly in love”), while integrating more widely accepted concepts of genre and compositional technique.
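To make this tag-based framing concrete, here is a toy sketch of how a free-text prompt can be decomposed into the kinds of grounded attributes described above. The tag vocabulary and categories are illustrative, borrowed from the examples in this article; this is not MusicLM’s actual interface or taxonomy.

```python
# Toy sketch: decompose a free-text music prompt into concrete attribute
# tags, in the spirit of MusicLM's grounded descriptors. The vocabulary
# below is invented for illustration, not MusicLM's real taxonomy.

TAG_VOCABULARY = {
    "quality": ["snappy", "amateurish", "high-fidelity", "distorted"],
    "mood": ["hypnotic", "trance-like", "calming", "madly in love"],
    "provenance": ["youtube tutorial", "live recording"],
    "instrumentation": ["violin", "guitar riff", "synthesizer"],
}

def extract_tags(prompt: str) -> dict:
    """Return every vocabulary tag that appears in the prompt, by category."""
    lowered = prompt.lower()
    found = {}
    for category, tags in TAG_VOCABULARY.items():
        hits = [tag for tag in tags if tag in lowered]
        if hits:
            found[category] = hits
    return found

prompt = "a calming violin melody backed by a distorted guitar riff"
print(extract_tags(prompt))
# {'quality': ['distorted'], 'mood': ['calming'],
#  'instrumentation': ['violin', 'guitar riff']}
```

Even this crude keyword match shows the appeal of grounding: “calming” and “distorted” are checkable properties of a clip, while “lofty notions of style” are not.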

What you expect is (not) what you get

Piling onto this theoretical question of music classification is the more practical shortage of training data. Unlike its creative counterparts (e.g. DALL-E), there isn’t an abundance of text-to-audio captions readily available.

MusicLM was trained on “MusicCaps,” a library of 5,521 music samples captioned by musicians. Bound by the very human limitation of capacity and the almost-philosophical matter of style, MusicCaps offers finite granularity in its semantic interpretation of musical characteristics. The result is occasional gaps between user inputs and generated outputs: the “happy, energetic” tune you asked for may not turn out as you expect.
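A small sketch makes the granularity problem tangible. The two records below are invented examples written in a MusicCaps-like shape (a free-text caption plus a list of aspect tags; the exact field names are an assumption, not a quote from the released dataset). With only a few thousand captioned clips, the pool of distinct aspect tags is finite, so many prompts simply have no close match in the training data.

```python
# Invented MusicCaps-style records: a caption plus aspect tags per clip.
# Field names ("caption", "aspect_list") are assumed for illustration.
samples = [
    {
        "caption": "A calming violin melody backed by a distorted guitar riff.",
        "aspect_list": ["calming", "violin", "distorted guitar"],
    },
    {
        "caption": "An amateurish, snappy pop loop, like a YouTube tutorial.",
        "aspect_list": ["amateurish", "snappy", "pop"],
    },
]

# The model can only be supervised on aspects that actually occur in the
# captions, so the reachable vocabulary is bounded by the dataset's size.
vocabulary = {aspect for row in samples for aspect in row["aspect_list"]}
print(sorted(vocabulary))
# ['amateurish', 'calming', 'distorted guitar', 'pop', 'snappy', 'violin']
```

Scale the same idea up to 5,521 captions and the vocabulary grows, but it remains finite, which is exactly why a “happy, energetic” request can land somewhere unexpected.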

However, when asked about this discrepancy, MusicLM researcher Chris Donahue and research software engineer Andrea Agostinelli celebrate the human element of the model. They describe primary applications such as “[exploring] ideas more efficiently [or overcoming] writer’s block,” and are quick to note that MusicLM does offer multiple interpretations of the same prompt — so if one generated track fails to meet your expectations, another might.

“This [disconnect] is a big research direction for us, there isn’t a single answer,” Andrea admits. Chris attributes this disconnect to the “abstract relationship between music and text” insisting that “how we react to music is [even more] loosely defined.”

In a way — by fostering an exchange that welcomes moments lost in translation — MusicLM’s language-based structure positions the model as a sounding board: as you prompt the model with a vague idea, the generated approximations help you figure out what you actually want to make.

Beauty is in breaking things

With their experience producing Chain Tripping (2019) — a Grammy-nominated album entirely made with MusicVAE (another music generative AI developed by Google) — the band YACHT chimes in on MusicLM’s future in music production. “As long as it can be broken apart a little bit and tinkered with, I think there’s great potential,” says frontwoman Claire L. Evans.

To YACHT, generative AI exists as a means to an end, rather than the end in itself. “You never make exactly what you set out to make,” says founding member Jona Bechtolt, describing the mechanics of a studio session. “It’s because there’s this imperfect conduit that is you,” Claire adds, attributing the alluring and evocative process of producing music to the serendipitous disconnect that occurs when artists put pen to paper.

The band describes how the misalignment of user inputs and generated work inspires creativity through iteration. “There is a discursive quality to [MusicLM]… it’s giving you feedback… I think it’s the surreal feeling of seeing something in the mirror, like a funhouse mirror,” says Claire. “A computer accent,” band member Rob Kieswetter jokes, referencing a documentary about the band’s experience making Chain Tripping.

However, in discussing the implications of this move to text-to-audio generation, Claire cautions against the rise of taxonomization in music: “imperfect semantic elements are great, it’s the precise ones that we should worry about… [labels] create boundaries to discovery and creation that don’t need to exist… everyone’s conditioned to think about music as this salad of hyper-specific genre references [that can be used] to conjure a new song.”

Nonetheless, both YACHT and the MusicLM team agree that MusicLM — as it currently is — holds promise. “Either way there’s going to be a whole new slew of artists fine-tuning this tool to their needs,” Rob contends.

Engineer Andrea recalls instances where creative tools weren’t popularized for their intended purposes: “the synthesizer eventually opened up a huge wave of new genres and ways of expression. [It unlocked] new ways to express music, even for people who are not ‘musicians.’” “Historically, it has been pretty difficult to predict how each piece of music technology will play out,” researcher Chris concludes.

Happy accidents, reinvention, and self-discovery

Back to the stubborn, unforgiving question: Will generative AI replace musicians? Perhaps not.

The relationship between artists and AI is not a linear one. While it’s appealing to prescribe an intricate and carefully intentional system of collaboration between artists and AI, as of right now, the process of using AI in producing art resembles more of a friendly game of trial and error.

In music, AI gives us room to explore the latent spaces between what we describe and what we really mean. It materializes ideas in a way that helps shape creative direction. By outlining these acute moments lost in translation, tools like MusicLM set us up to produce what actually ends up making it to the stage… or your Discover Weekly.

Tiffany Ng is an art & tech writer based in NYC. Her work has been published in i-D Vice, Vogue, South China Morning Post, and Highsnobiety.