artificial intelligence
As the music industry grapples with the far-reaching implications of artificial intelligence, Warner Music Group CEO Robert Kyncl is being mindful of the opportunities it will create. “Framing it only as a threat is inaccurate,” he said on Tuesday (May 9) during the earnings call for the company’s second fiscal quarter ended March 31.
Kyncl’s tenure as chief business officer at YouTube informs his viewpoint on AI’s potential to contribute to the music industry’s growth. “When I arrived [at YouTube] in 2010, we were fighting many lawsuits around the world and were generating low tens of millions of dollars from [user-generated content],” he continued. “We turned that liability into a billion-dollar opportunity in a handful of years and multibillion-dollar revenue stream over time. In 2022, YouTube announced that it paid out over $2 billion from UGC to music rightsholders alone and far more across all content industries.”
Not that AI doesn’t pose challenges for owners of intellectual property. A wave of high-profile AI-generated songs — such as the “fake Drake”/The Weeknd track, “Heart on My Sleeve,” by an anonymous producer under the name Ghostwriter — has revealed how off-the-shelf generative AI technologies can easily replicate the sound and style of popular artists without their consent.
“Our first priority is to vigorously enforce our copyrights and our rights in name, image, likeness, and voice, to defend the originality of our artists and songwriters,” said Kyncl, echoing comments by Universal Music Group CEO Lucian Grainge in a letter sent to Spotify and other music streaming platforms in March. In that letter, Grainge said UMG “would not hesitate to take steps to protect our rights and those of our artists” against AI companies that use its intellectual property to “train” their AI.
“It is crucial that any AI generative platform discloses what their AI is trained on and this must happen all around the world,” Kyncl said on Tuesday. He pointed to the EU Artificial Intelligence Act — a proposed law that would establish government oversight and transparency requirements for AI systems — and efforts by U.S. Sen. Chuck Schumer in April to build “a flexible and resilient AI policy framework” to impose guardrails while allowing for innovation.
“I can promise you that whenever and wherever there is a legislative initiative on AI, we will be there in force to ensure that protection of intellectual property is high on the agenda,” Kyncl continued.
Kyncl went on to note that technological problems also require technological solutions. AI companies and distribution platforms can manage the proliferation of AI music by building new technologies for “identifying and tracking of content on consumption platforms that can appropriately identify copyright and remunerate copyright holders,” he continued.
Again, Kyncl’s employment at YouTube comes into play here. Prior to his arrival, the platform built a proprietary digital fingerprinting system, Content ID, to manage and monetize copyrighted material. In fact, one of Kyncl’s first hires as CEO of WMG, president of technology Ariel Bardin, is a former YouTube vp of product management who oversaw Content ID.
Labels are also attempting to rein in AI content by adopting “user-centric” royalty payment models that reward authentic, human-created recordings over mass-produced imitations. During UMG’s first quarter earnings call on April 26, Grainge said that “with the right incentive structures in place, platforms can focus on rewarding and enhancing the artist-fan relationship and, at the same time, elevate the user experience on their platforms, by reducing the sea of noise … eliminating unauthorized, unwanted and infringing content entirely.” WMG adopted user-centric (i.e. “fan-powered”) royalties on SoundCloud in 2022.
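The difference between the standard pro-rata model and the user-centric ("fan-powered") model Grainge and WMG are pointing to is easiest to see with rough numbers. The sketch below is purely illustrative, with made-up subscribers and fees; real streaming services use far more complex accounting, and the function names here are my own.

```python
# Illustrative comparison of pro-rata vs. user-centric ("fan-powered")
# royalty models. All numbers and names are hypothetical.

def pro_rata(subscribers, fee):
    """Pool every subscriber's fee, then split by share of total streams."""
    pool = len(subscribers) * fee
    total_streams = sum(s for user in subscribers for s in user.values())
    payouts = {}
    for user in subscribers:
        for artist, streams in user.items():
            payouts[artist] = payouts.get(artist, 0) + pool * streams / total_streams
    return payouts

def user_centric(subscribers, fee):
    """Split each subscriber's own fee among only the artists they played."""
    payouts = {}
    for user in subscribers:
        user_total = sum(user.values())
        for artist, streams in user.items():
            payouts[artist] = payouts.get(artist, 0) + fee * streams / user_total
    return payouts

# Two subscribers at $10/month: a casual fan of a human artist, and a
# bot-like account streaming AI-generated tracks around the clock.
fans = [
    {"human_artist": 50},   # 50 streams this month
    {"ai_farm": 5000},      # 5,000 streams this month
]
print(pro_rata(fans, 10))      # ai_farm captures ~99% of the $20 pool
print(user_centric(fans, 10))  # each artist receives one subscriber's $10
```

Under pro-rata, the high-volume account drains nearly the whole pool; under the user-centric model, mass-produced streams can only ever claim the fees of the accounts that actually played them — which is why labels frame it as a defense against AI content farms.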
Britain’s competition watchdog said Thursday that it’s opening a review of the artificial intelligence market, focusing on the technology underpinning chatbots like ChatGPT.
The Competition and Markets Authority said it will look into the opportunities and risks of AI as well as the competition rules and consumer protections that may be needed.
AI’s ability to mimic human behavior has dazzled users but also drawn attention from regulators and experts around the world concerned about its dangers as its use mushrooms — affecting jobs, copyright, education, privacy and many other parts of life.
The CEOs of Google, Microsoft and ChatGPT-maker OpenAI will meet Thursday with U.S. Vice President Kamala Harris for talks on how to ease the risks of their technology. And European Union negotiators are putting the finishing touches on sweeping new AI rules.
The U.K. watchdog said the goal of the review is to help guide the development of AI to ensure open and competitive markets that don’t end up being unfairly dominated by a few big players.
Artificial intelligence “has the potential to transform the way businesses compete as well as drive substantial economic growth,” CMA Chief Executive Sarah Cardell said. “It’s crucial that the potential benefits of this transformative technology are readily accessible to U.K. businesses and consumers while people remain protected from issues like false or misleading information.”
The authority will examine competition and barriers to entry in the development of foundation models. Also known as large language models, they’re a sub-category of general purpose AI that includes systems like ChatGPT.
The algorithms these models use are trained on vast pools of online information like blog posts and digital books to generate text and images that resemble human work, but they still face limitations including a tendency to fabricate information.
Right now, our artificial intelligence future sure seems to look a lot like… Wes Anderson movies! Over the past week, various AI programs have used the director’s quirky style to frame TikTok posts, rethink the looks of movies and even, more recently, make a trailer for a fictitious reboot of Star Wars. The future may be creepy, but at least it looks color-saturated and carefully composed.
The fake, fan-made Star Wars trailer, appropriately subtitled “The Galactic Menagerie,” is great fun, and its viral success shows both the strengths and current limitations of AI technology. Anderson’s distinctive visual style is an important part of his art, and the ostensible mission to “steal the Emperor’s artifact” sounds straight out of Star Wars. But the original Star Wars captured the imaginations of so many fans because it suggested a future that had some sand in its gears – the interstellar battle station had a trash compactor, and the spaceport cantina had a live band (and, one assumes, a public performance license).
Right now, at least, AI can’t seem to get past the surface.
“Heart on My Sleeve,” the so-called “Fake Drake” track apparently made with an artificial intelligence-generated version of Drake’s vocals, also sounds perfectly polished: precisely in tune and on tempo. So do most modern pop songs, which tend to be pitch-corrected and sonically tweaked. (Most modern pop isn’t recorded live in a studio so much as assembled on a computer, so why shouldn’t it sound that way?) It’s hard to tell exactly why this style became so popular – the ease of smoothing over mistakes, the temptation of technical perfection, the sheer availability of samples and beats – but it’s what the mass streaming audience seems to want.
It’s also the kind of music that AI can most easily imitate. AI can already create pitch-perfect vocals, right-on-the-beat drumming, the kind of airless perfection of the Wes Anderson Star Wars trailer. It’s harder to learn a particular creator’s style – the phrasing and delivery that set singers apart as much as their voices do. So far, many of the songs online with AI-generated voices seem to lay the cloned voice on top of the original singer’s words, even though most pop music is less about technical excellence than style of delivery. And quirks of timing and emphasis are even harder to imitate.
Most big new pop stars are short on quirks, but they might do well to develop them. Whatever laws and agreements eventually regulate AI – and it pains me to point out that the key word there is eventually – artists will still end up competing with algorithms. And since algorithms don’t need to eat or sleep, creators are going to have to do something that they can’t. One of those things, at least for now, is embracing a certain amount of imperfection. Computers will catch up, of course – if they can avoid mistakes, they can certainly learn to make a few – but that could take some time.
Until relatively recently, most great artists had quirks: Led Zeppelin drummer John Bonham played a bit behind the beat, Snoop Dogg started drawling out verses at a time when most rappers fired them off, and Willie Nelson has a sense of phrasing that owes more to jazz than rock. (Nelson’s timing is going to be hard for algorithms to imitate until they start smoking weed.) In most cases, these quirks are strengths – Bonham’s drumming made Zeppelin swing. But many producers came to see these kinds of imperfections as relics of an age when correcting them was difficult and the sound of pop changed so much that they now stick out like sore thumbs.
I don’t mean to romanticize the past. And newer artists have quirks, too – they just tend to smooth them over with studio software. But this kind of artificial perfection is easier to imitate. So, I wonder if the rise of AI – not the parodies we’re seeing so far, but the flood of computer-created pop that’s coming – will push musicians to embrace a rougher, messier aesthetic.
Most artists wouldn’t admit to this, of course – acknowledging commercial pressure is usually considered uncool. But big-picture shifts in the market have always shaped the sound of pop music. Consider how many artists created 35-to-45-minute albums in the ’60s and ’70s, and then 60-to-75-minute albums in the ’90s. Were they almost twice as inspired, or did the amount of music that fit on a CD – and the additional mechanical royalties they could make if they had songwriting credit – drive them to create more? These days, presumably also for economic reasons, songs are getting shorter and albums are getting longer.
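The mechanical-royalty incentive behind longer CD-era albums is easy to see with back-of-the-envelope math. The per-track rate below is an assumed round number, not the actual statutory figure for any given year, and the sales figure is invented for illustration.

```python
# Rough illustration of the CD-era incentive to pad albums: more tracks
# meant more mechanical royalties per copy sold, if the artist held
# songwriting credit. Rate and sales numbers are assumptions.

RATE_PER_TRACK = 0.09   # assumed mechanical rate, dollars per track per copy

def album_mechanicals(tracks, copies_sold, rate=RATE_PER_TRACK):
    """Total mechanical royalties generated by one album release."""
    return tracks * rate * copies_sold

lp_era = album_mechanicals(tracks=10, copies_sold=1_000_000)   # ~40-minute LP
cd_era = album_mechanicals(tracks=16, copies_sold=1_000_000)   # ~70-minute CD
print(f"${lp_era:,.0f} vs ${cd_era:,.0f}")  # $900,000 vs $1,440,000
```

Same album sales, roughly 60% more songwriting income just for filling the disc — which is the economic pressure the column is describing.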
It will be interesting to see if they get a bit rougher, too. In Star Wars, at least, the future isn’t all about a sparkling surface.
For the Record is a regular column from deputy editorial director Robert Levine analyzing news and trends in the music industry. Find more here.
Sounding alarms about artificial intelligence has become a popular pastime in the ChatGPT era, taken up by high-profile figures as varied as industrialist Elon Musk, leftist intellectual Noam Chomsky and the 99-year-old retired statesman Henry Kissinger.
But it’s the concerns of insiders in the AI research community that are attracting particular attention. Geoffrey Hinton, a pioneering researcher and the so-called “Godfather of AI,” quit his role at Google so he could speak more freely about the dangers of the technology he helped create.
Over his decades-long career, Hinton’s pioneering work on deep learning and neural networks helped lay the foundation for much of the AI technology we see today.
There has been a spasm of AI introductions in recent months. San Francisco-based startup OpenAI, the Microsoft-backed company behind ChatGPT, rolled out its latest artificial intelligence model, GPT-4, in March. Other tech giants have invested in competing tools — including Google’s “Bard.”
Some of the dangers of AI chatbots are “quite scary,” Hinton told the BBC. “Right now, they’re not more intelligent than us, as far as I can tell. But I think they soon may be.”
In an interview with MIT Technology Review, Hinton also pointed to “bad actors” that may use AI in ways that could have detrimental impacts on society — such as manipulating elections or instigating violence.
Hinton, 75, says he retired from Google so that he could speak openly about the potential risks as someone who no longer works for the tech giant.
“I want to talk about AI safety issues without having to worry about how it interacts with Google’s business,” he told MIT Technology Review. “As long as I’m paid by Google, I can’t do that.”
Since announcing his departure, Hinton has maintained that Google has “acted very responsibly” regarding AI. He told MIT Technology Review that there’s also “a lot of good things about Google” that he would want to talk about — but those comments would be “much more credible if I’m not at Google anymore.”
Google confirmed that Hinton had retired from his role after 10 years overseeing the Google Research team in Toronto.
Hinton declined further comment Tuesday but said he would talk more about it at a conference Wednesday.
At the heart of the debate on the state of AI is whether the primary dangers are in the future or present. On one side are hypothetical scenarios of existential risk caused by computers that supersede human intelligence. On the other are concerns about automated technology that’s already getting widely deployed by businesses and governments and can cause real-world harms.
“For good or for not, what the chatbot moment has done is made AI a national conversation and an international conversation that doesn’t only include AI experts and developers,” said Alondra Nelson, who until February led the White House Office of Science and Technology Policy and its push to craft guidelines around the responsible use of AI tools.
“AI is no longer abstract, and we have this kind of opening, I think, to have a new conversation about what we want a democratic future and a non-exploitative future with technology to look like,” Nelson said in an interview last month.
A number of AI researchers have long expressed concerns about racial, gender and other forms of bias in AI systems, including text-based large language models that are trained on huge troves of human writing and can amplify discrimination that exists in society.
“We need to take a step back and really think about whose needs are being put front and center in the discussion about risks,” said Sarah Myers West, managing director of the nonprofit AI Now Institute. “The harms that are being enacted by AI systems today are really not evenly distributed. It’s very much exacerbating existing patterns of inequality.”
Hinton was one of three AI pioneers who in 2019 won the Turing Award, an honor that has become known as the tech industry’s version of the Nobel Prize. The other two winners, Yoshua Bengio and Yann LeCun, have also expressed concerns about the future of AI.
Bengio, a professor at the University of Montreal, signed a petition in late March calling for tech companies to agree to a 6-month pause on developing powerful AI systems, while LeCun, a top AI scientist at Facebook parent Meta, has taken a more optimistic approach.
The Guild of Music Supervisors’ ninth annual State of Music in Media conference is slated to take place on Saturday, Aug. 19 at The Los Angeles Film School in Hollywood, Calif.
The day-long conference, a collaboration with The Los Angeles Film School, will include panels exploring topics such as the emerging role of AI and other new technologies in the music industry, celebrating the 50th anniversary of hip-hop, and discussing the business and art of music supervision across all crafts.
“The Guild of Music Supervisors is thrilled to be hosting our ninth annual education conference once again at The Los Angeles Film School,” said Guild president Joel C. High in a statement. “It is always our ambition to bring the highest level of panels and discussion as a service to our members and friends. This year there are some critical topics that need to be brought to the community and we cannot wait for all to be involved.”
There are both in-person and virtual ticket options. The in-person event includes networking, a happy hour, educational panels, live musical performances and one-on-one mentoring sessions for aspiring music supervisors.
Members of the Guild of Music Supervisors and Friends of the Guild will receive a discount on their ticket purchases. Tickets are available to the public at full price and come with a complimentary one-year subscription as a Friend of the Guild.
Below is a pricing grid of the various ticket options. To learn more, visit https://www.gmsmediaconference.com. To purchase tickets, visit the guild’s ticketing page.
Universal Music Group chairman/CEO Lucian Grainge took aim at artificial intelligence again on Wednesday (April 26), this time blaming AI for the “oversupply” of “bad” content on streaming platforms and pointing to user-centric payment models as the answer.
AI tools have exploded in popularity in recent months, and Grainge has been an outspoken critic of generative AI being used to mimic copyrighted works, as with the song “Heart on My Sleeve,” which used AI to generate vocals from UMG artists Drake and The Weeknd.
In fervent comments Grainge made during a call discussing UMG’s earnings Wednesday, the executive said AI significantly contributes to a glut of “poor-quality” content on streaming platforms, muddies search experiences for fans looking for their favorite artists and generally has “virtually no consumer appeal.”
“Any way you look at it, this oversupply, whether or not AI-created is, simply, bad. Bad for artists. Bad for fans. And bad for the platforms themselves,” Grainge said.
The head of the world’s largest music company specifically called out the role of generative AI platforms, which are “trained” to produce new creations after being fed vast quantities of existing works known as “inputs.” In the case of AI music platforms, that process involves huge numbers of songs, which many across the music industry argue infringes on artists’ and labels’ copyrights.
Grainge argued that “the flood of unwanted content” generated by AI could be reduced by adopting new payment models from streaming platforms. UMG is currently exploring “artist-centric” models with Tidal and Deezer, while SoundCloud and Warner Music Group also announced a partnership on so-called user-centric royalties last year.
“With the right incentive structures in place, platforms can focus on rewarding and enhancing the artist-fan relationship and, at the same time, elevate the user experience on their platforms, by reducing the sea of ‘noise’ … eliminating unauthorized, unwanted, and infringing content entirely,” Grainge said on Wednesday.
While UMG continues exploring with partners Tidal, Deezer and others what form alternative streaming payment models should take, an analyst on Wednesday’s call asked Grainge if, in the meantime, the company would ever consider licensing songs to an AI platform.
“We are open to licensing … but we have to respect our artists and the integrity of their work,” Grainge said. “We should be the hostess with the mostest. We’re open for business with businesses that are legitimate and (interested in) partnership for growth.”
A new song purportedly created with AI that mashes up Bad Bunny and Rihanna was uploaded on Tuesday (April 25), apparently by the same mystery maker behind the controversial AI Drake/Weeknd tune “Heart On My Sleeve” that led to immediate takedown notices earlier this month.
Uploaded directly to SoundCloud, YouTube and TikTok (but not Spotify, Apple Music or other major streaming services) under a different handle, Ghostwrider777, the 1:07 song, “Por Qué,” features a spot-on yearning Bad Bunny vocal over a shuffling reggaeton beat along with RihRih singing “Baby you shine so bright/ Like diamonds in the sky/ I feel alive/ But now you got me thinkin.’”
The message accompanying the clip reads, “they tried to shut us down,” and the video running underneath the song features a character in a white bucket hat, green ski goggles, winter gloves and a white sheet covering their face and upper body. As the song plays out, the on-screen graphics read: “i used AI to make a Bad Bunny song ft. Rihanna… this video will be removed in 24 hours… they tried to quiet my brother, but we will prevail.”
Representatives for Bad Bunny and Rihanna did not immediately respond to Billboard‘s requests for comment.
The Drake/Weeknd fake was quickly pulled from most streaming platforms after their label, Universal Music Group, condemned it in a statement on April 17 that said the track was “infringing content created with generative AI.” That song was posted by the anonymous TikTok user Ghostwriter977 and credited to “Ghostwriter” on streaming platforms; at press time it was not clear whether both songs were created by the same person. Before it was pulled, “Heart On My Sleeve” racked up millions of views and listens on Spotify, TikTok and YouTube, with the creator writing “This is just the beginning” in a comment beneath the YouTube clip.
The short-lived success of the song amplified a growing worry in the music industry over the potential impact of AI, even as a newly label-free Grimes mused this week about “killing copyright” by offering to split royalties 50/50 with fans who create a successful AI-generated song using her voice.
Click here to listen to “Por Qué.”
For the last week, the most talked-about song in the music business has been “Heart on My Sleeve,” the track said to have been created by using artificial intelligence to imitate vocals from Drake and The Weeknd and uploaded to TikTok by the user Ghostwriter977. And while most reactions were impressed, there was a big difference between those of fans (“This isn’t bad, which is pretty cool!”) and executives (“This isn’t bad, which is really scary!”). As with much online technology, however, what’s truly remarkable, and frightening, isn’t the quality – it’s the potential quantity.
This particular track didn’t do much damage. Streaming services pulled it down after receiving a request from Universal Music Group, for which both Drake and The Weeknd record. YouTube says the track was removed because of a copyright claim, and “Heart on My Sleeve” contains at least one obvious infringement in the form of a Metro Boomin producer tag. But it’s not as clear as creators and rightsholders might like that imitating Drake’s voice qualifies as copyright infringement.
In a statement released around the time the track was taken down, Universal said that “the training of generative AI using our artists’ music” violated copyright. But it’s a bit more complicated than that. Whether that’s true in the U.S. depends on whether training AI with music qualifies as fair use – which will not be clear until courts rule on the matter. Whether it’s true in other countries depends on local statutory exceptions for text and data mining, which vary from country to country. Either way, though, purposefully imitating Drake’s voice would almost certainly violate what an American lawyer might call his right of publicity but a fan would more likely call his artistic identity. There are precedents for this: A court held that Frito-Lay violated the rights of Tom Waits by imitating his voice for a commercial, and Bette Midler won a similar lawsuit against Ford. Both of those cases involved an implied endorsement – the suggestion of approval where none existed.
The violation of an AI imitation is far more fundamental, though. The essence of Drake’s art – the essence of his Drakeness, if you will – is his voice. (That voice isn’t great by any technical definition, any more than Tom Waits’ is, but it’s a fundamental part of his creativity, even his very identity.) Imitating that is fair enough when it comes to parody – this video of takes on Bob Dylan‘s vocal style seems like it should be fair game because it’s commenting on Dylan instead of ripping him off – but creating a counterfeit Drake might be even more of a moral violation than a commercial one. Bad imitators may be tacky, but people tend to regard very accurate ones as spooky. “Heart on My Sleeve” isn’t Drake Lite so much as an early attempt at Drakenstein – interesting to look at, but fundamentally alarming in the way it imitates humanity. (Myths and stories return to this theme all the time, and it’s hard to think of many with happy endings.) Universal executives know that – they have talked internally about the coming challenges of AI for years – which is why the company’s comment asked stakeholders “which side of history” they want to be on.
This track is just the sign of a coming storm. The history of technology is filled with debates about whether new forms of media will surpass old ones in quality, when what often matters more is how cheap and easy they are. No one thinks movies look better on a phone screen than in a theatre, but the device is right there in your hand. Travel agents might be better at booking flights than Expedia, but – well, the fact that there aren’t that many of them anymore makes my point. Here, the issue isn’t whether AI can make a Drake track better than Drake – which is actually impossible by definition, because a Drake track without Drake isn’t really a Drake track at all – but rather how much more productive AI can be than human artists, and what happens once it starts operating at scale.
Imagine the most prolific artist you can think of – say, an insomniac YoungBoy Never Broke Again crossed with King Gizzard & the Lizard Wizard. Then imagine that this hypothetical artist never needs to eat or sleep or do anything else that interferes with work. Then imagine that he – or, really, it – never varies from a proven commercial formula. Now clone that artist thousands of times. Because that’s the real threat of AI to the music business – not the quality that could arrive someday but the quantity that’s coming sooner than we can imagine.
It has been said that 100,000 tracks get uploaded to streaming services every day. What happens once algorithms can make pop music at scale? Will that turn into a million tracks a day? Or 100 million? Could music recorded by humans become an exception instead of a rule? In the immediate future, most of this music wouldn’t be very interesting – but the sheer scale and inevitable variety could cut into the revenue collected by creators and rightsholders. The music business doesn’t need an umbrella – it needs a flood barrier.
In the long run, that barrier should be legal – some combination of copyright, personality rights and unfair competition law. That will take time to build, though. For now, streaming services need to continue to work with creators and rightsholders to make clear the difference between artists and their artificial imitators.
Fans who want to hear Drake shouldn’t have to guess which songs are really his.
Spotify CEO Daniel Ek said Tuesday (April 25) that, contrary to the widespread backlash artificial intelligence (AI) tools have faced, he’s optimistic the technology could actually be a good thing for musicians and for Spotify.
While acknowledging the copyright infringement concerns presented by songs like the AI-generated Drake fake “Heart on My Sleeve” — which racked up 600,000 streams on Spotify before the platform took it down — Ek said in comments on a Spotify conference call and podcast that AI tools could ease the learning curve for first-time music creators and spark a new era of artistic expression.
“On the positive side, this could be potentially huge for creativity,” Ek said on a conference call discussing the company’s first-quarter earnings. “That should lead to more music [which] we think is great culturally, but it also benefits Spotify because the more creators we have on our service the better it is and the more opportunity we have to grow engagement and revenue.”
Ek’s entrepreneurial confidence that AI can be an industry boon in certain instances stands in contrast to a steady campaign of condemnation for generative machine learning tools coming from Universal Music Group, the National Music Publishers’ Association (NMPA) and others.
At the same time, companies including Spotify, Warner Music Group, HYBE, ByteDance, SoundCloud and a host of start-ups have leaned in on the potential of AI, investing or partnering with machine learning companies.
The industry is still sorting out the ways in which AI can be used and attempting to delineate between AI tools that are helpful and those that are potentially harmful. The use case causing the most consternation employs a machine-learning process to identify patterns and characteristics in songs that make them irresistible and reproduce those patterns and characteristics in new creations.
Functional music — i.e., sounds designed to promote sleep, studying or relaxation — has become a fertile genre for AI, and playlists featuring AI-enhanced or generated music have racked up millions of followers on Spotify and other streaming services. This has led to concerns by some record executives who have noted that functional music eats into major-label market share.
For Spotify’s part, in February the platform launched an “AI DJ,” which uses AI technology to gin up song recommendations for premium subscribers based on their listening history, with commentary delivered by an AI-generated voice.
“I’m very familiar with the scary parts … the complete generative stuff or even the so-called deep fakes that pretend to be someone they’re not,” Ek said on Tuesday’s episode of Spotify’s For the Record podcast. “I choose to look at the glass as more half-full than half-empty. I think if it’s done right, these AIs will be incorporated into almost every product suite to enable creativity to be available to many more people around the world.”
Grimes loves to push the envelope. But after telling her fans that she’s down with “open sourcing all art and killing copyright” in a series of tweets on Sunday night, and offering to split royalties 50/50 on any successful AI-generated song that uses her voice, the no-rules singer realized she might need some guardrails after all.
“Ok hate this part but we may do copyright takedowns ONLY for rly rly toxic lyrics w Grimes voice,” she tweeted on Monday afternoon (April 24). “imo you’d rly have to push it for me to wanna take smthn down but I guess plz don’t be *the worst*. as in, try not to exit the current Overton window of lyrical content w regards to sex/violence. Like no baby murder songs plz.”
The mother of two with ex Elon Musk then went further, openly debating with herself whether issuing takedown notices after making the open call for facsimile Grimes songs with no limits would make her a hypocrite. “I think I’m Streisand effecting this now but I don’t wanna have to issue a takedown and be a hypocrite later,” she said, referring to the phenomenon in which an attempt to censor or hide a piece of information only serves to shine a further spotlight on it.
“***That’s the only rule. Rly don’t like to do a rule but don’t wanna be responsible for a Nazi anthem unless it’s somehow in jest a la producers I guess,” she added in a nod to the 1967 Mel Brooks satirical comedy, The Producers, about the staging of a Nazi musical. (Grimes admitted in a later tweet that she has never seen The Producers and that the plan was to wing it and “send takedown notices to scary stuff,” before adding that she’s not even sure her team is capable of sending takedown notices.)
“wud prefer avoiding political stuff but If it’s a small meme with ur friends we prob won’t penalize that. Probably just if smthn is viral and anti abortion or smthn like that,” she said, reiterating that she really doesn’t like adding rules after the fact and apologizing and saying “but this is the only thing.”
When a commenter said it sounded like Grimes was definitely “streisand-ing this situation,” she responded, “Yes but I gotta say it. And if it’s a meme to make awful grimes songs it’ll prob be a week of hard work for us but not a boring outcome. I imagine the DAN// Sidney Bing going murderous equivalent will have to happen with vocal deepfakes and I’m entertained if that happens to us.”
Another commenter noted that the potential for offensive or gross posts “should’ve been their [Grimes’ team’s] first thought,” which the singer said it actually was. “I just didn’t think the original post abt ai wud be a thing, like it was sort of a casual post so my poor team is just catching up with now having to organize all this,” she said.
In a tweet referencing that weekend’s chaotic removal of legacy blue checkmarks from non-paying users on Musk’s Twitter, a rollout later partially reversed for some well-known users who adamantly refused to pay, a user joked that Grimes had “learned from Elon! Twitter announcement first, then let’s figure out the details after.”
Grimes turned what could have been a diss into a positive, noting, “In my defense this has always been a Grimes feature too.”
The back-and-forth continued when a user said even with takedowns Grimes could still end up in an “uncomfortable situation” where an offensive song could still be out in the world “misleading people until the end of time,” as things tend to do on the internet.
Her response to that one was classic Grimes: “We expect a certain amount of chaos,” she said. “Grimes is an art project, not a music project. The ultimate goal has always been to push boundaries rather than have a nice song. The point is to poke holes in the simulation and see what happens even if it’s a bad outcome for us.”
Most importantly, fans wanted to know when the software would be available for other artists to try with their voices. The good news, according to Grimes, is that it’s already out there and she was busy collecting resources. In even better news, she also told her followers that she has “lots of real Grimes songs ready to go too.”
Fans have been eagerly awaiting any news about Grimes’ next album, the as-yet-unscheduled BOOK 1, after she recently said that “music is my side quest now. Tbh reduced pressure x increased freedom = prob more music just ideally ‘Low key I’ll always do my best to entertain whilst depleting my literal reputation I hope that’s ok I love y’all.”
The musician’s most recent album was 2020’s Miss Anthropocene, which included the singles “Violence,” “So Heavy I Fell Through the Earth,” “My Name Is Dark” and “Delete Forever.” Since then, she’s also released one-off songs including 2021’s “Player of Games” and last year’s “Shinigami Eyes.”
Grimes’ AI tease came a week after a fake song featuring AI-generated vocals from Drake and The Weeknd, “Heart on My Sleeve,” was pulled from streaming services after going viral.
“I’ll split 50% royalties on any successful AI generated song that uses my voice,” she promised while announcing the AI project, a stance that was in stark opposition to Universal Music Group, which acted quickly to condemn the “infringing content created with generative AI” that produced the phony superstar duet.
Check out Grimes’ tweets below.
Ok hate this part but we may do copyright takedowns ONLY for rly rly toxic lyrics w grimes voice: imo you’d rly have to push it for me to wanna take smthn down but I guess plz don’t be *the worst*. as in, try not to exit the current Overton window of lyrical content w regards to…— 𝔊𝔯𝔦𝔪𝔢𝔰 (@Grimezsz) April 24, 2023
Yes but I gotta say it. And if it’s a meme to make awful grimes songs it’ll prob be a week of hard work for us but not a boring outcome. I imagine the DAN// Sidney Bing going murderous equivalent will have to happen with vocal deepfakes and I’m entertained if that happens to us— 𝔊𝔯𝔦𝔪𝔢𝔰 (@Grimezsz) April 24, 2023
This was their first thought haha – I just didn’t think the original post abt ai wud be a thing, like it was sort of a casual post so my poor team is just catching up with now having to organize all this— 𝔊𝔯𝔦𝔪𝔢𝔰 (@Grimezsz) April 24, 2023
In my defense this has always been a grimes feature too— 𝔊𝔯𝔦𝔪𝔢𝔰 (@Grimezsz) April 24, 2023
Oh, I never saw the producers – I guess we just play it by ear and send takedown notices to scary stuff. I’m not sure we can even send takedown notices tbh. Like curious what the actual legality is, i think I chose not to copyright my name and likeness back when that was a…— 𝔊𝔯𝔦𝔪𝔢𝔰 (@Grimezsz) April 24, 2023
We expect a certain amount of chaos. grimes is an art project, not a music project. The ultimate goal has always been to push boundaries rather than have a nice song. The point is to poke holes in the simulation and see what happens even if it’s a bad outcome for us https://t.co/RSAW4xQCAi— 𝔊𝔯𝔦𝔪𝔢𝔰 (@Grimezsz) April 24, 2023