Right now, our artificial intelligence future sure seems to look a lot like… Wes Anderson movies! Over the past week, various AI programs have used the director’s quirky style to frame TikTok posts, rethink the looks of movies and even, more recently, make a trailer for a fictitious reboot of Star Wars. The future may be creepy, but at least it looks color-saturated and carefully composed.
The fake, fan-made Star Wars trailer, appropriately subtitled “The Galactic Menagerie,” is great fun, and its viral success shows both the strengths and current limitations of AI technology. Anderson’s distinctive visual style is an important part of his art, and the ostensible mission to “steal the Emperor’s artifact” sounds straight out of Star Wars. But the original Star Wars captured the imaginations of so many fans because it suggested a future that had some sand in its gears – the interstellar battle station had a trash compactor, and the spaceport cantina had a live band (and, one assumes, a public performance license).
Right now, at least, AI can’t seem to get past the surface.
“Heart on My Sleeve,” the so-called “Fake Drake” track apparently made with an artificial intelligence-generated version of Drake’s vocals, also sounds perfectly polished – precisely in tune and on tempo. So do most modern pop songs, which tend to be pitch-corrected and sonically tweaked. (Most modern pop isn’t recorded live in a studio so much as assembled on a computer, so why shouldn’t it sound that way?) It’s hard to tell exactly why this style became so popular – the ease of smoothing over mistakes, the temptation of technical perfection, the sheer availability of samples and beats – but it’s what the mass streaming audience seems to want.
It’s also the kind of music that AI can most easily imitate. AI can already create pitch-perfect vocals and right-on-the-beat drumming – the kind of airless perfection of the Wes Anderson Star Wars trailer. It’s harder to learn a particular creator’s style – the phrasing and delivery that set singers apart as much as their voices do. So far, many of the songs online with AI-generated voices seem to layer the cloned voice over the original singer’s words, although most pop music is less about technical excellence than style of delivery. And quirks of timing and emphasis are even harder to imitate.
Most big new pop stars are short on quirks, but they might do well to develop them. Whatever laws and agreements eventually regulate AI – and it pains me to point out that the key word there is eventually – artists will still end up competing with algorithms. And since algorithms don’t need to eat or sleep, creators are going to have to do something that they can’t. One of those things, at least for now, is embracing a certain amount of imperfection. Computers will catch up, of course – if they can avoid mistakes, they can certainly learn to make a few – but that could take some time.
Until relatively recently, most great artists had quirks: Led Zeppelin drummer John Bonham played a bit behind the beat, Snoop Dogg started drawling out verses at a time when most rappers fired them off, and Willie Nelson has a sense of phrasing that owes more to jazz than rock. (Nelson’s timing is going to be hard for algorithms to imitate until they start smoking weed.) In most cases, these quirks are strengths – Bonham’s drumming made Zeppelin swing. But many producers came to see these kinds of imperfections as relics of an age when correcting them was difficult, and the sound of pop changed so much that they now stick out like sore thumbs.
I don’t mean to romanticize the past. And newer artists have quirks, too – they just tend to smooth them over with studio software. But this kind of artificial perfection is easier to imitate. So, I wonder if the rise of AI – not the parodies we’re seeing so far, but the flood of computer-created pop that’s coming – will push musicians to embrace a rougher, messier aesthetic.
Most artists wouldn’t admit to this, of course – acknowledging commercial pressure is usually considered uncool. But big-picture shifts in the market have always shaped the sound of pop music. Consider how many artists created 35-to-45-minute albums in the ’60s and ’70s, and then 60-to-75-minute albums in the ’90s. Were they almost twice as inspired, or did the amount of music that fit on a CD – and the additional mechanical royalties they could make if they had songwriting credit – drive them to create more? These days, presumably also for economic reasons, songs are getting shorter and albums are getting longer.
It will be interesting to see if they get a bit rougher, too. In Star Wars, at least, the future isn’t all about a sparkling surface.
For the Record is a regular column from deputy editorial director Robert Levine analyzing news and trends in the music industry. Find more here.
Sounding alarms about artificial intelligence has become a popular pastime in the ChatGPT era, taken up by high-profile figures as varied as industrialist Elon Musk, leftist intellectual Noam Chomsky and the 99-year-old retired statesman Henry Kissinger.
But it’s the concerns of insiders in the AI research community that are attracting particular attention. Geoffrey Hinton, a pioneering researcher and the so-called “Godfather of AI,” quit his role at Google so he could speak more freely about the dangers of the technology he helped create.
Over his decades-long career, Hinton’s pioneering work on deep learning and neural networks helped lay the foundation for much of the AI technology we see today.
There has been a flurry of AI releases in recent months. San Francisco-based startup OpenAI, the Microsoft-backed company behind ChatGPT, rolled out its latest artificial intelligence model, GPT-4, in March. Other tech giants have invested in competing tools — including Google’s “Bard.”
Some of the dangers of AI chatbots are “quite scary,” Hinton told the BBC. “Right now, they’re not more intelligent than us, as far as I can tell. But I think they soon may be.”
In an interview with MIT Technology Review, Hinton also pointed to “bad actors” that may use AI in ways that could have detrimental impacts on society — such as manipulating elections or instigating violence.
Hinton, 75, says he retired from Google so that he could speak openly about the potential risks as someone who no longer works for the tech giant.
“I want to talk about AI safety issues without having to worry about how it interacts with Google’s business,” he told MIT Technology Review. “As long as I’m paid by Google, I can’t do that.”
Since announcing his departure, Hinton has maintained that Google has “acted very responsibly” regarding AI. He told MIT Technology Review that there’s also “a lot of good things about Google” that he would want to talk about — but those comments would be “much more credible if I’m not at Google anymore.”
Google confirmed that Hinton had retired from his role after 10 years overseeing the Google Research team in Toronto.
Hinton declined further comment Tuesday but said he would talk more about it at a conference Wednesday.
At the heart of the debate on the state of AI is whether the primary dangers are in the future or present. On one side are hypothetical scenarios of existential risk caused by computers that supersede human intelligence. On the other are concerns about automated technology that’s already getting widely deployed by businesses and governments and can cause real-world harms.
“For good or for not, what the chatbot moment has done is made AI a national conversation and an international conversation that doesn’t only include AI experts and developers,” said Alondra Nelson, who until February led the White House Office of Science and Technology Policy and its push to craft guidelines around the responsible use of AI tools.
“AI is no longer abstract, and we have this kind of opening, I think, to have a new conversation about what we want a democratic future and a non-exploitative future with technology to look like,” Nelson said in an interview last month.
A number of AI researchers have long expressed concerns about racial, gender and other forms of bias in AI systems, including text-based large language models that are trained on huge troves of human writing and can amplify discrimination that exists in society.
“We need to take a step back and really think about whose needs are being put front and center in the discussion about risks,” said Sarah Myers West, managing director of the nonprofit AI Now Institute. “The harms that are being enacted by AI systems today are really not evenly distributed. It’s very much exacerbating existing patterns of inequality.”
Hinton was one of three AI pioneers who in 2019 won the Turing Award, an honor that has become known as the tech industry’s version of the Nobel Prize. The other two winners, Yoshua Bengio and Yann LeCun, have also expressed concerns about the future of AI.
Bengio, a professor at the University of Montreal, signed a petition in late March calling for tech companies to agree to a 6-month pause on developing powerful AI systems, while LeCun, a top AI scientist at Facebook parent Meta, has taken a more optimistic approach.
Since Elon Musk acquired Twitter, he has been doing a fantastic job making it an awful experience, leaving users screaming for a new place to debate $200 debates. Well, their prayers might have finally been answered.
Many of you have noticed people on your Twitter timelines talking about Bluesky and pondering where they can secure an invite, leaving you wondering what the hell they are talking about.
Well, Bluesky could be the app that officially puts Twitter in the social media hospice, joining the likes of MySpace, Tumblr, and other social media platforms hanging on for dear life.
Bluesky is backed by Twitter’s former owner, Jack Dorsey, and it has the buzz Twitter first had. Per the New York Times, Bluesky could be Twitter 2.0 and is already boasting users like Representative Alexandria Ocasio-Cortez and Chrissy Teigen, to name a few, with thousands begging for invites to get in on the action.
Those who have the privilege of using Bluesky say that the app comes the closest to giving users the feeling Twitter used to give before Elon Musk messed it all up.
Bluesky has all of the core features that made Twitter popping, like the ability to post short text and photo updates, reply, and share each other’s posts.
How Does Bluesky Differ From Twitter?
Bluesky’s chief executive, Jay Graber, spoke on what makes her app different from Elon Musk’s Twitter in a blog post, noting it will be a “decentralized system” that will eventually allow users to create their own apps and build their own communities within Bluesky.
According to Ms. Graber, the design is meant to ensure that no single individual could create rules for the entire Bluesky community, the New York Times reports.
Also, Bluesky is “open protocol,” which is unusual for social apps. This means that Bluesky could allow for cross-posting between different social media platforms. This is something that Twitter used to do before becoming “walled off.” For example, there was a time when Instagram links populated Twitter timelines, showing a preview of the post. Now, if you share an IG post on your Twitter account, it just shows a link.
Sounds lit, so how do you sign up? Right now, while the app is in its testing phase, you can only use Bluesky if you receive an invite from someone already using it.
Bluesky is available for download on iOS and will come to Android devices soon. You can sign up to be on the waiting list by heading here.
—
Photo: SOPA Images / Getty
The Pharrell Pack, a new collection of digital and physical wearables and redeemables, was curated by Skateboard P himself. Merging the worlds of tech and fashion, a natural space for Pharrell Williams, the Pharrell Pack offers buyers a variety of stylish and fun options.
The Pharrell Pack is a collaboration with Doodles that offers a selection of physical wearables, toys, and other items alongside their digital counterparts. With Pharrell’s eye for fashion and style guiding the project, several brands connected to the Virginia Beach, Va. native, such as BBC, Ice Cream, adidas, and Humanrace, are involved. Doodles recently launched a test version of The Stoodio, its digital home born from the web3 world.
Via The Stoodio, buyers can redeem their Pharrell Pack and put new fashion flourishes on their Doodles 2 avatar with the custom digital wearables. The Pharrell Pack includes three digital wearables, one redeemable, and one beta pass. Of the 300 packs, 48 will have a grail redeemable and a matching grail wearable, while the other 252 will feature a redeemable for non-green Sambas along with exclusive wearables.
If that isn’t enough, 12 Doodles will get a redeemable for limited-edition green Sambas marking Pharrell’s 50th birthday. These Sambas will not be for sale anywhere else.
To learn more about the Pharrell Pack and how to snag your own, please click here.
—
Photo: Doodles/Pharrell Williams
It looks like PlayStation isn’t out of the handheld business after all.
Insider Gaming exclusively reports PlayStation is actively working on a new handheld console called the “Q Lite.” The video game website says it will not center around cloud gaming but will instead be an extension of the PS5, utilizing the console’s Remote Play function that the company has been hyping up as of late.
Per Insider Gaming:
Codenamed the Q Lite, the next PlayStation handheld is the next piece of Sony hardware that aims to be yet another piece of hardware that requires the PlayStation 5. Insider Gaming understands that the Q Lite is not a cloud-streaming device but instead uses Remote Play with the PlayStation 5.
According to the report, the Q Lite will feature adaptive streaming up to 1080p and 60FPS and require constant internet connectivity.
As for its look, Insider Gaming says it will look like a PS5 DualSense controller with “a massive 8-inch LCD touchscreen in the center.” It will also have adaptive triggers and support haptic feedback, plus all of the bells and whistles you’ve come to expect from a portable gaming device.
The Q Lite Could Be A Part of The Second Phase of The PS5
The news of the Q Lite follows reports of Sony working on a PlayStation 5 Pro and a PS5 console with a detachable disc drive, and the handheld will reportedly arrive before both consoles.
All three devices could be a part of the “second phase of the PS5,” according to Jeff Grubb’s reporting. PlayStation is also said to be working on wireless earbuds called “Project Nomad” and a wireless headset called “Project Voyager,” and all of these devices, along with the Q Lite, could be coming very soon.
Sony officially discontinued its last portable device, the PS Vita, in 2019, leaving the impression that the company was done with handheld consoles.
That appears not to be the case. Are you excited about Q Lite? Let us know in the comment section below.
—
Photo: YOSHIKAZU TSUNO / Getty
For the last week, the most talked-about song in the music business has been “Heart on My Sleeve,” the track said to have been created by using artificial intelligence to imitate vocals from Drake and The Weeknd and uploaded to TikTok by the user Ghostwriter977. And while most reactions were impressed, there was a big difference between those of fans (“This isn’t bad, which is pretty cool!”) and executives (“This isn’t bad, which is really scary!”). As with much online technology, however, what’s truly remarkable, and frightening, isn’t the quality – it’s the potential quantity.
This particular track didn’t do much damage. Streaming services pulled it down after receiving a request from Universal Music Group, for which both Drake and The Weeknd record. YouTube says the track was removed because of a copyright claim, and “Heart on My Sleeve” contains at least one obvious infringement in the form of a Metro Boomin producer tag. But it’s not as clear as creators and rightsholders might like that imitating Drake’s voice qualifies as copyright infringement.
In a statement released around the time the track was taken down, Universal said that “the training of generative AI using our artists’ music” violated copyright. But it’s a bit more complicated than that. Whether that’s true in the U.S. depends on whether training AI with music qualifies as fair use – which will not be clear until courts rule on the matter. Whether it’s true in other countries depends on local statutory exceptions for text and data mining, which vary from country to country. Either way, though, purposefully imitating Drake’s voice would almost certainly violate what an American lawyer might call his right of publicity but a fan would more likely call his artistic identity. There are precedents for this: A court held that Frito-Lay violated the rights of Tom Waits by imitating his voice for a commercial, and Bette Midler won a similar lawsuit against Ford. Both of those cases involved an implied endorsement – the suggestion of approval where none existed.
The violation of an AI imitation is far more fundamental, though. The essence of Drake’s art – the essence of his Drakeness, if you will – is his voice. (That voice isn’t great by any technical definition, any more than Tom Waits’ is, but it’s a fundamental part of his creativity, even his very identity.) Imitating that is fair enough when it comes to parody – this video of takes on Bob Dylan‘s vocal style seems like it should be fair game because it’s commenting on Dylan instead of ripping him off – but creating a counterfeit Drake might be even more of a moral violation than a commercial one. Bad imitators may be tacky, but people tend to regard very accurate ones as spooky. “Heart on My Sleeve” isn’t Drake Lite so much as an early attempt at Drakenstein – interesting to look at, but fundamentally alarming in the way it imitates humanity. (Myths and stories return to this theme all the time, and it’s hard to think of many with happy endings.) Universal executives know that – they have talked internally about the coming challenges of AI for years – which is why the company’s comment asked stakeholders “which side of history” they want to be on.
This track is just a sign of a coming storm. The history of technology is filled with debates about when new forms of media and technology will surpass old ones in terms of quality, when what often matters much more is how cheap and easy they are. No one thinks movies look better on a phone screen than in a theatre, but the device is right there in your hand. Travel agents might be better at booking flights than Expedia, but – well, the fact that there aren’t that many of them anymore makes my point. Here, the issue isn’t whether AI can make a Drake track better than Drake – which is actually impossible by definition, because a Drake track without Drake isn’t really a Drake track at all – but rather how much more productive AI can be than human artists, and what happens once it starts operating at scale.
Imagine the most prolific artist you can think of – say, an insomniac YoungBoy Never Broke Again crossed with King Gizzard & the Lizard Wizard. Then imagine that this hypothetical artist never needs to eat or sleep or do anything else that interferes with work. Then imagine that he – or, really, it – never varies from a proven commercial formula. Now clone that artist thousands of times. Because that’s the real threat of AI to the music business – not the quality that could arrive someday but the quantity that’s coming sooner than we can imagine.
It has been said that 100,000 tracks get uploaded to streaming services every day. What happens once algorithms can make pop music at scale? Will that turn into a million tracks a day? Or 100 million? Could music recorded by humans become an exception instead of a rule? In the immediate future, most of this music wouldn’t be very interesting – but the sheer scale and inevitable variety could cut into the revenue collected by creators and rightsholders. The music business doesn’t need an umbrella – it needs a flood barrier.
In the long run, that barrier should be legal – some combination of copyright, personality rights and unfair competition law. That will take time to build, though. For now, streaming services need to continue to work with creators and rightsholders to make clear the difference between artists and their artificial imitators.
Fans who want to hear Drake shouldn’t have to guess which songs are really his.
For the Record is a regular column from deputy editorial director Robert Levine analyzing news and trends in the music industry. Find more here.
Many thought the so-called “Chief Twit” was bluffing when he said he would snatch away people’s blue checkmarks if they didn’t subscribe to Twitter Blue. Well, he did it, and, of course, it was an absolute sh*t show.
While people were puff, puff, passing on 4/20, Phony Stark, aka Elon Musk, did start taking away legacy blue checkmarks. One by one, celebrities and other notable figures began to point out that their blue checkmarks were gone, vowing never to subscribe to Twitter Blue and reminding their followers that they had, in fact, been verified.
But some also noticed that other celebrities still had their blue checkmarks well after the social media platform took away legacy checkmarks. TMZ reports that Soulja Boy, Khloe Kardashian, Taylor Swift, O.J. Simpson, Arnold Schwarzenegger, Ryan Reynolds, The Weeknd, Sia, Nick Carter, LL Cool J, and John Cena still have their blue checks, potentially meaning they are paying the $8 subscription price.
When you click the blue checks next to their names, the description reads, “This account is verified because they are subscribed to Twitter Blue and verified their phone number.”
Remember that a blue checkmark is not the only perk of a Twitter Blue subscription. You can also post longer videos, drop longer tweets and edit them after you hit send.
Elon Musk Is Paying For Certain Celebs’ Twitter Blue Subscriptions
But there is also some funny business going on. Some celebs still have their checkmarks without paying: author Stephen King, who still has his, immediately alerted his followers that he is not paying for Twitter Blue.
LeBron James, who is notoriously cheap and told his followers that he was not paying to keep his blue checkmark, also still has his.
“LeCap,” basically calling Bron Bron a liar, began trending, but it turns out there was a reason he and Stephen King were exceptions to the new stupid rule.
Elon Musk was personally paying for their subscriptions out of his pocket. Musk admitted as much via his account, adding that he also paid for William Shatner’s blue checkmark.
What a mess.
You can see more reactions to the blue checkmark ridiculousness in the gallery below.
—
Photo: Christopher Furlong / Getty
When Universal Music Group emailed Spotify, Apple Music and other streaming services in March asking them to stop artificial-intelligence companies from using its labels’ recordings to train their machine-learning software, it fired the first Howitzer shell of what’s shaping up as the next conflict between creators and computers. As Warner Music Group, HYBE, ByteDance, Spotify and other industry giants invest in AI development, along with a plethora of small startups, artists and songwriters are clamoring for protection against developers that use music created by professionals to train AI algorithms. Developers, meanwhile, are looking for safe havens where they can continue their work unfettered by government interference.
To someday generate music that rivals the work of human creators, AI models use machine learning to identify and mimic the characteristics that make a song irresistible, like the sticky verse-chorus structure of pop, the 808 drums that define the rhythm of hip-hop or the meteoric drop that defines electronic dance music. These are distinctions human musicians have to learn over their lives, either through osmosis or music education.
Machine learning is exponentially faster, though; it’s usually achieved by feeding millions, even billions, of so-called “inputs” into an AI model to build its musical vocabulary. Because of the sheer scale of data needed to train current systems, that data almost always includes the work of professionals, and to many copyright owners’ dismay, almost no one asks their permission to use it.
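To make that “learning patterns from inputs” idea concrete, here is a deliberately tiny, hypothetical sketch in Python. It is nothing like the large neural networks commercial systems train on enormous audio catalogs; the song data, chord names and function names are all made up for illustration. It only shows the underlying principle: count the patterns found in example “inputs” and generate something new from those statistics.

import random
from collections import defaultdict

# Hypothetical training "inputs": chord progressions standing in for example songs.
training_progressions = [
    ["C", "G", "Am", "F", "C", "G", "F", "C"],
    ["Am", "F", "C", "G", "Am", "F", "C", "G"],
    ["C", "Am", "F", "G", "C", "Am", "F", "G"],
]

# Count how often each chord follows each other chord across the examples.
transitions = defaultdict(lambda: defaultdict(int))
for progression in training_progressions:
    for current, following in zip(progression, progression[1:]):
        transitions[current][following] += 1

def generate(start="C", length=8):
    # Sample a new progression from the learned transition counts.
    chords = [start]
    for _ in range(length - 1):
        options = transitions[chords[-1]]
        if not options:  # unseen chord: fall back to the starting chord
            chords.append(start)
            continue
        next_chord = random.choices(list(options), weights=list(options.values()))[0]
        chords.append(next_chord)
    return chords

print(" -> ".join(generate()))

Real systems replace those simple chord counts with billions of learned parameters, which is exactly why the scale of the training data, and whose work ends up inside it, matters so much.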
Countries around the world have various ways of regulating what’s allowed when it comes to what’s called the text and data mining of copyrighted material for AI training. And some territories are concluding that fewer rules will lead to more business.
China, Israel, Japan, South Korea and Singapore are among the countries that have largely positioned themselves as safe havens for AI companies in terms of industry-friendly regulation. In January, Israel’s Ministry of Justice defined its stance on the issue, saying that “lifting the copyright uncertainties that surround this issue [of training AI generators] can spur innovation and maximize the competitiveness of Israeli-based enterprises in both [machine-learning] and content creation.”
Singapore also “certainly strives to be a hub for AI,” says Bryan Tan, attorney and partner at Reed Smith, which has an office there. “It’s one of the most permissive places. But having said that, I think the world changes very quickly,” Tan says. He adds that even in countries where exceptions in copyright for text and data mining are established, there is a chance that developments in the fast-evolving AI sector could lead to change.
In the United States, Amir Ghavi, a partner at Fried Frank who is representing open-source text-to-image developer Stability AI in a number of upcoming landmark cases, says that though the United States has a “strong tradition of fair use … this is all playing out in real time” with decisions in upcoming cases like his setting significant precedents for AI and copyright law.
Many rights owners, including musicians like Helienne Lindevall, president of the European Composers and Songwriters Alliance, are hoping to establish consent as a basic practice. But, she asks, “How do you know when AI has used your work?”
AI companies tend to keep their training process secret, but Mat Dryhurst, a musician, podcast host and co-founder of music technology company Spawning, says many rely on just a few data sets, such as LAION-5B (as in 5 billion data points) and Common Crawl, an openly available archive of scraped web data. To help establish a compromise between copyright owners and AI developers, Spawning has created a website called HaveIBeenTrained.com, which helps creators determine whether their work is found in these common data sets and, free of charge, opt out of being used as fodder for training.
These requests are not backed by law, although Dryhurst says, “We think it’s in every AI organization’s best interest to respect our active opt-outs. One, because this is the right thing to do, and two, because the legality of this varies territory to territory. This is safer legally for AI companies, and we don’t charge them to partner with us. We do the work for them.”
The concept of opting out was first popularized by the European Union’s Copyright Directive, passed in 2019. Though Sophie Goossens, a partner at Reed Smith who works in Paris and London on entertainment, media and technology law, says the definition of “opt out” was initially vague, its inclusion makes the EU one of the strictest jurisdictions when it comes to AI training.
There is a fear, however, that passing strict AI copyright regulations could result in a country missing the opportunity to establish itself as a next-generation Silicon Valley and reap the economic benefits that would follow. Russian President Vladimir Putin believes the stakes are even higher. In 2017, he stated that the nation that leads in AI “will be the ruler of the world.” The United Kingdom’s Intellectual Property Office seemed to be moving in that direction when it published a statement last summer recommending that text and data mining be exempt from opt-outs in hopes of becoming Europe’s haven for AI. In February, however, the British government put the brakes on the IPO’s proposal, leaving its future uncertain.
Lindevall and others in the music industry say they are fighting for even better standards. “We don’t want to opt out, we want to opt in,” she says. “Then we want a clear structure for remuneration.”
The lion’s share of U.S.-based music and entertainment organizations — more than 40, including ASCAP, BMI, RIAA, SESAC and the National Music Publishers’ Association — are in agreement and recently launched the Human Artistry Campaign, which established seven principles advocating best practices for AI that are intended to protect creators’ copyrights. No. 4: “Governments should not create new copyright or other IP exemptions that allow AI developers to exploit creators without permission or compensation.”
Today, the idea that rights holders could one day license works for machine learning still seems far off. Among the potential solutions for remuneration are blanket licenses, something like the blank-tape levies used in parts of Europe. But given the patchwork of international law on this subject, and the complexities of tracking down and paying rights holders, some feel these fixes are not viable.
Dryhurst says he and the Spawning team are working on a concrete solution: an “opt in” tool. Stability AI has signed on as its first partner for this innovation, and Dryhurst says the newest version of its text-to-image AI software, Stable Diffusion 3, will not include any of the 78 million artworks that opted out prior to this advancement. “This is a win,” he says. “I am really hopeful others will follow suit.”
Happy Division Day. Ubisoft is celebrating by giving long-time The Division players what they want: more content for Tom Clancy’s The Division 2.
Tom Clancy’s The Division 2 was released in 2019 and is still going strong four years later. During Ubisoft’s Division Day live stream, fans were treated to a look at the upcoming roadmap for The Division 2’s Year 5 of content.
Year 5 of Division 2 will kick off in June, but players won’t have to wait that long to try some new content. Beginning April 21, players can sign up to participate in a public PC test server for the new mode Descent.
From April 25 to May 9, players who log in during that time will also get the Resident Evil Leon Kennedy RPD outfit.
A Full Breakdown of Year 5’s Seasons
Broken Wings (June): This new season brings a new twist on Manhunts and introduces a new game mode titled Descent, a rogue-lite mode available for free to all Division 2 players when Year 5 begins. Players will also see the continuation of a multi-season rebuild of the Castle Settlement that will bring the devastated landmark back to life with a renewed purpose. As a part of the premium pass for Season One: Broken Wings, players will be able to unlock pieces of a Splinter Cell-themed outfit to help Fifth Freedom their way throughout DC and NYC.
Puppeteers: A challenging new Incursion will take Agents on a venture out to the Meret Estate for another confrontation with the Cleaners.
Vanguard: Agents will go back to New York City for the holidays and discover new revelations about Aaron Keener and his Rogues.
Black Diamond: New story DLC will be available that adds new zones, new main missions, and a whole new endgame structure.
Ubisoft says each of the four seasons will continue introducing Manhunts, Leagues, and Events. Players must own the Warlords of New York expansion to have access to the new content. Descent mode will be available to all Division 2 players.
You can watch the full stream below.
—
Photo: Ubisoft / Tom Clancy’s The Division 2
Livestream concert start-up Mandolin shut down on Monday after its business struggled to survive following the resumption of in-person concerts.
Mandolin launched in the spring of 2020, alongside over two dozen similar platforms, when widespread COVID-19 lockdowns forced in-person experiences online. In 2021, the company raised $12 million from venture capital firms including Marc Benioff’s TIME Ventures to hire more staff, support its mobile version Mandolin Live+ and fund the acquisition of competitor NoonChorus.
Mandolin’s chief executive Mary Kay Huse, a former Salesforce executive, aimed for Mandolin Live+ to become a companion app to live concerts, allowing fans who couldn’t make it in person to watch the event live at home — just as fans of a sports team might, but with more interactive features. However, on Monday, the company said it was closing its doors and gave little context.
“We are sad to announce that after 3 incredible years of connecting artists and fans more authentically through digital experiences, we are officially closing down our product and business operations,” a statement on Mandolin’s website reads.
“We’d like to sincerely thank our clients and partners for their belief in our mission and the time spent helping us develop a platform that truly empowered artists to own their fan relationships. Though we can no longer lead the charge, we firmly believe market power will continue to shift toward better supporting artists in this endeavor and we are all so appreciative of the feedback, faith and validation you’ve provided over the years to get us this far.”