A U.S. senator representing Music City had tough questions about artificial intelligence’s impact on the music industry during a Congressional hearing on Tuesday, at one point asking the CEO of the company behind ChatGPT to commit to not using copyrighted songs to train future machines.
At a hearing before the Senate Judiciary Committee about potential regulation for AI, Sen. Marsha Blackburn (R-Tenn.) repeatedly grilled Sam Altman, CEO of OpenAI, over how songwriters and musical artists should be compensated when their works are used by AI companies.
Opening her questioning, Blackburn said she had used OpenAI’s Jukebox to create a song that mimicked Garth Brooks – and made clear she was concerned about how the singer’s music and voice had been used to create such a tool.
“You’re training it on these copyrighted songs,” Blackburn told Altman. “How do you compensate the artist?”
“If I can go in and say ‘write me a song that sounds like Garth Brooks,’ and it takes part of an existing song, there has to be compensation to that artist for that utilization and that use,” Blackburn said. “If it was radio play, it would be there. If it was streaming, it would be there.”
At one point, Blackburn demanded a firm answer: “Can you commit, as you’ve done with consumer data, not to train [AI models] on artists’ and songwriters’ copyrighted works, or use their voices and their likenesses without first receiving their consent?”
Though Altman did not directly answer that question, he repeatedly told the senator that artists “deserve control” over how their copyrighted music and their voices were used by AI companies.
“We think that content creators need to benefit from this technology,” Altman told the committee. “Exactly what the economic model is, we’re still talking to artists and content owners about what they want. I think there’s a lot of ways this can happen. But very clearly, no matter what the law is, the right thing to do is to make sure people get significant upside benefit from this new technology.”
Blackburn’s questioning came amid a far broader discussion of the potential risks posed by AI, including existential threats to democracy, major harm to the labor market, and the widespread proliferation of misinformation. One witness, a New York University professor and expert in artificial intelligence, told the lawmakers that it poses problems “on a scale that humanity has not seen before.”
The music industry, too, is worried about AI-driven disruption. Last month, a new song featuring AI-generated fake vocals from Drake and The Weeknd went viral, underscoring growing concerns about AI’s impact on music and highlighting the legal uncertainties that surround it.
One of the biggest open questions is over whether copyrighted music can be used to train AI platforms – the process whereby machines “learn” to spit out new creations by ingesting millions of existing works. Major labels and other industry players have already said that such training is illegal, and cutting-edge litigation against the creators of such platforms could be coming soon.
At Tuesday’s hearing, in repeatedly asking Altman to weigh in on that question, Blackburn drew historical parallels to the last major technological disruption to wreak havoc on the music industry — a scenario that also posed novel legal and policy questions.
“We lived through Napster,” Blackburn said. “That was something that really cost a lot of artists a lot of money.”
Though he voiced support for compensation for artists, Altman did not get into specifics, saying that many industry stakeholders had “different opinions” on how creators should be paid. When Blackburn asked him if he thought the government should create an organization similar to SoundExchange – the group that collects certain blanket royalties for streaming – Altman said he wasn’t familiar with it.
“You’ve got your team behind you,” Blackburn said. “Get back to me on that.”
In 1994, at the dawn of the internet era, Rolling Stone asked Steve Jobs if he still had faith in technology. “It’s not a faith in technology,” he responded. “It’s faith in people.”
Today, at the dawn of the artificial intelligence era, we put our faith in people too.
It’s hard to think of an issue that has exploded onto the public scene with the furor of the debate over AI, which went from obscure technology journals to national morning shows practically overnight. This week, Congress is convening the first two of what will surely be many hearings on the issue, including one with OpenAI CEO Sam Altman and another with musician, voice actor and SAG-AFTRA National Board member Dan Navarro.
As members of the global Human Artistry Campaign, made up of more than 100 organizations that represent a united, worldwide coalition of the creative arts, we welcome this open and active debate. It’s gratifying to see policymakers, industry, and our own creative community asking tough questions up front. It’s a lot easier to chart a course in advance than to play catch-up afterward.
We don’t have long to get this right, either. The internet is already awash in unlicensed and unethical “style” and “soundalike” tools that rip off the writing, voice, likeness and style of professional artists and songwriters without authorization or permission. Powerful new engines like OpenAI’s ChatGPT and Jukebox, Google’s MusicLM and Microsoft’s AI-powered Bing have been trained on vast troves of musical compositions, lyrics, and sound recordings — as well as every other type of data and information available on the internet — without even the most basic transparency or disclosure, let alone consent from the creators whose work is being used. Songwriters, recording artists, and musicians today are literally being forced to compete against AI programs trained on copies of their own compositions and recordings.
RIAA Chairman/CEO Mitch Glazier
We strongly support AI that can be used to enhance art and stretch the potential of human creativity even further. Technology has always pushed art forward, and AI will be no different.
At the same time, however, human artistry must and will always remain at the core of genuine creation. The basis of creative expression is the sharing of lived experiences — an artist-to-audience/audience-to-artist connection that forms our culture and identity.
Without a rich supply of human-created works, there would be nothing on which to train AI in the first place. And if we don’t lay down a policy foundation now that respects, values and compensates the unique genius of human creators, we will end up in a cultural cul-de-sac, feeding AI-generated works back into the engines that produced them in a costly and ultimately empty race to the artistic bottom.
That policy foundation must start with the core value of consent. Use of copyrighted works to train or develop AI must be subject to free-market licensing and authorization from all rights holders. Creators and copyright owners must retain exclusive control over the ways their work is used. The moral invasion of AI engines that steal the core of a professional performer’s identity — the product of a lifetime’s hard work and dedication — without permission or pay cannot be tolerated.
David Israelite
This will require AI developers to ensure copyrighted training inputs are approved and licensed, including those used by pre-trained AIs they employ. It means they need to keep thorough and transparent records of the creative works and likenesses used to train AI systems and how they were exploited. These obligations are nothing new, though — anyone who uses another creator’s work or a professional’s voice, image or likeness must already ensure they have the necessary rights and maintain the records to prove it.
Congress is right to bring in AI developers like Sam Altman to hear the technology community’s vision for the future of AI and explore the safeguards and guardrails the industry is relying on today. The issues around the rapid deployment of novel AI capabilities are numerous and profound: data privacy, deepfakes, bias and misinformation in training sets, job displacement and national security.
Creators will be watching and listening closely for concrete, meaningful commitments to the core principles of permission and fair market licensing that are necessary to sustain songwriters and recording artists and drive innovation.
We have already seen some of what AI can do. Now it falls to us to insist that it be done in ethical and lawful ways. Nothing short of our culture — and, over time, our very humanity — is at stake.
David Israelite is the President & CEO of the National Music Publishers’ Association. NMPA is the trade association representing American music publishers and their songwriting partners.
Mitch Glazier is chairman/CEO of the RIAA, the trade organization that supports and promotes the creative and financial vitality of the major recorded-music companies.
As a musician, educator, and author, I’ve spent the last few years examining AI’s challenges to the music ecosystem. But recently, after a comical misunderstanding on a U.S. podcast, I ended up playing devil’s advocate for the AI side of the AI and music equation. The experience was thought-provoking as I took on the role of an accidental AI evangelist, and I started to refocus on the question: “Why are we fighting for ethical use of AI in music in the first place? What are the benefits, and are they worth the time and effort?”
As we hurtle from the now-quaint AI chatbot ChatGPT, to the expected text-to-video and text-to-music capabilities of GPT 5 (rumoured to drop in December), to Microsoft’s acknowledgment that AGI is feasible (artificial general intelligence, or a sentient AI being, or Skynet to be alarmist), to viral AI-generated hits with vocals in the style of Drake & The Weeknd and Bad Bunny & Rihanna, it can be easy to focus on the doom and gloom of AI. However, doing so does us a disservice, as it shifts the conversation from “how do we harness AI’s benefits ethically” to “how do we stop AI from destroying the world?” There are many, with a cheeky chappie nod to Ian Dury, “Reasons to Be Cheerful” (or at least not fearful) about music and AI. Here are nine reasons why we need to embrace it with guardrails, rather than throwing the baby out with the bathwater.
Fun: Yes, damn the ethics – temporarily. Generative AI technologies, which ingest content and then create new content based on those inputs, are incredibly fun. They tease and capture our curiosity, drawing us in. We might tell our employers that we use text-to-image services like DALL-E and Stable Diffusion or chat and search bots like ChatGPT and Jasper to optimise workflow to stay ahead of the technological curve, but they are also so seductively entertaining. Elementary AI prohibition won’t work; our solutions must be at least as stimulating as our challenges.
Time-saving: According to a survey by Water & Music, this is what music makers desire most. Many musicians spend countless hours managing social media accounts, wishing they could focus on music-making instead. AI solutions promise to grant that wish, allowing them to auto-generate text for social posts and announcements, and providing inspiration and potential starting points for new tracks, giving them the gift of time and helping them write, record, and release music more quickly. Let’s use automation wisely to free up musicians for their art and their economics.
Education: Despite dwindling funds for music education, technology offers new ways to make music accessible to everyone. Affordable AI tools can help students break through privileged barriers, providing access to personalised learning. I work with Ableton, which makes a variety of music production hardware and software. Successful initiatives such as the Ableton Push 1 campaign – which offered discounts to owners who traded in their Push 1 MIDI controllers, then refurbished the returned units and provided them for free to schools that needed them – demonstrate how digital tools can empower the economically marginalised, enable them to explore new musical styles and techniques, and nurture their passion for music.
Imperfect charm: AI’s imperfections and quirks make it endearing and relatable. AI’s unpredictable nature can lead to happy accidents and to repurposing technologies for new musical uses. The fact that LLMs (large language models), which analyze huge swaths of text and then generate new text based on the patterns they learn, can be flawed retains a touch of human magic and humour in our interactions with them. Let’s enjoy this fleeting VHS fuzziness before it’s gone.
Affordable: Setting aside the environmental costs for a moment, AI has become exponentially accessible. AI allows creators to produce incredible results with basic tools. Last July, I purchased an expensive GPU-filled MacBook with dreams of making mind-blowing AI musical creations, but by September, I was doing just that using only my old phone’s browser. This so-called “democratisation” of music production can level the playing field for musicians worldwide, allowing more people to pursue their passion. Can we get it to increase their income too?
Tech Stacking: Experimenting with new combinations of generative AI APIs (application programming interfaces) opens up a world of DIY creativity. APIs are essentially pre-made functionality that developers can slot into their code easily, allowing them to focus on innovation rather than spending their time creating generative AI applications from scratch. This collision of technologies can encourage collaboration between musicians and developers, fostering a dynamic and innovative environment that crucially must be aligned with new licensing and rights payment models.
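The stacking idea can be sketched in a few lines of Python. Everything below is hypothetical: the function names stand in for calls to generative services (a lyric endpoint and a text-to-music endpoint) and return canned values rather than hitting any real API, but the shape shows how one service’s output becomes the next one’s input.

```python
# Illustrative sketch only: these functions are hypothetical stand-ins for
# hosted generative AI APIs, not real services.

def generate_lyrics(theme: str) -> str:
    """Stand-in for a text-generation API call (e.g., a hosted LLM endpoint)."""
    return f"A verse about {theme}, written line by line."

def generate_melody(lyrics: str, style: str) -> dict:
    """Stand-in for a text-to-music API call returning track metadata."""
    return {"style": style, "bars": 16, "lyrics": lyrics}

def make_demo_track(theme: str, style: str) -> dict:
    # "Tech stacking": the output of one API becomes the input to the next,
    # so the developer composes services instead of building models from scratch.
    lyrics = generate_lyrics(theme)
    return generate_melody(lyrics, style)

track = make_demo_track("reasons to be cheerful", "synth-pop")
print(track["style"], track["bars"])
```

In a real stack, each stand-in would be a licensed API call, which is where the new rights and payment models mentioned above would need to plug in.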
Elevated Chatter: As AI becomes more prevalent, the quality of conversations surrounding it has improved. People are starting to debate the legality of using copyrighted material to train AI, particularly in the music world, with a variety of strong arguments being made on behalf of human creators. In my research, I tried to address the complexities of AI in the music ecosystem, and now, I find these discussions happening everywhere, from John Oliver to barber shops. This elevated discourse can help us as a global industry make informed and necessarily swift decisions about AI’s role in our lives, better integrating its reasonably cheerful benefits and not being overwhelmed by its many chilling implications.
Inspiring the next generation: Introducing AI to young minds can be inspiring and terrifying. In my undergraduate module at Windmill Studios Dublin, I tasked students with inventing new music IP using existing cutting-edge technologies, with one rule of thumb: they could use a hologram but not bring someone back from the dead. Initially, I felt terrible about presenting such a potentially dystopian vision to young minds. But what happened next amazed me: all 40-odd students (from two classes) came up with outstanding commercial ideas. Their creativity and enthusiasm reminded me that the adage, “no one knows anything,” holds as true for music as it ever did.
Time to adapt: Perhaps the biggest reason to be cheerful at the moment is that we still have enough time to address AI’s challenges. As Microsoft announces the “first sparks of AGI” and we face a saturated streaming market, we must work quickly together to ensure an equitable future for music makers. In my book, “Artificial Intelligence and Music Ecosystem,” my fellow contributors and I delve into the pressing issues surrounding AI, and now, more than ever, we need to take action to steer the course of music’s evolution. As AI continues to develop, it’s crucial for musicians, industry professionals, and policymakers to engage in open dialogue, collaborating to create a sustainable and equitable music ecosystem. Otherwise, what looks like Drake and sounds like Drake may not actually be Drake in the future.
The Human Artistry Campaign is the first step in this direction and interlinks with the concerns of MIT professor Max Tegmark’s “Pause Giant AI Experiments” open letter (which currently has 27,000+ signatures and calls for a pause in the development of AI to give governments the chance to regulate it) for a big-vision picture. If we work together, we can ensure that AI serves as a tool for artistic growth and sustainability. But we must move fast and nurture things, as AI’s growth is exponential. Do you remember back when ChatGPT was on the cutting edge of AI? Well, that was only five months ago, and we’re now talking about the possibility of sentience. That’s why I am working with some of the biggest companies in music and big tech to create AI:OK, a system to identify ethical stakeholders and help create an equitable AI music ecosystem. If you are human and would like to help shape this future, it might be the most significant reason to be cheerful of all.
Dr. Martin Clancy is a musician, author, artist manager and AI expert. He is a founding member of Irish rock band In Tua Nua, chairs the IEEE Global AI Ethics Arts Committee, serves as an IRC Research Fellow at Trinity College Dublin, and authored the book Artificial Intelligence and Music Ecosystem. He also manages Irish singer Jack Lukeman and is a leading voice in advocating the ethical use of AI in music.
HipHopWired Featured Video
Is the Game of The Year competition already over? Many critics and gamers believe so with the arrival of The Legend of Zelda: Tears of the Kingdom.
Today is a big day for Nintendo. Reviews have officially dropped for Tears of The Kingdom, and it is no surprise that it keeps the same energy as Breath of The Wild and then some.
Right now, on OpenCritic, the game is sitting at a mighty 97 rating based on 62 critic reviews, earning the title of the “best-reviewed game on the website.” On Metacritic, a 96. Tom Marks of IGN gave Tears of The Kingdom a perfect score, writing in his review, “The Legend of Zelda: Tears of the Kingdom is an unfathomable follow-up, expanding a world that already felt full beyond expectation and raising the bar ever higher into the clouds.”
“An excellent sequel and one of the best Zelda games ever made. A follow-up that builds upon and refines the achievements of the original while adding many new and equally innovative ideas of its own,” GameCentral said in its review, where they also gave the game a perfect score.
“Tears of the Kingdom is a triumph of open-ended game design that pays homage to the best parts of the Zelda franchise’s own storied history–and sometimes exceeds them,” Steven Watts wrote in his review for GameSpot.
Is The Game of The Year Debate Already Over?
The ridiculously high reviews for the game have gamers and critics saying Tears of The Kingdom is a lock for Game of The Year honors. That is a safe bet because Breath of The Wild was met with similar praise when it hit the Nintendo Switch in 2017.
The Legend of Zelda: Tears of The Kingdom arrives on Nintendo Switch consoles on May 12.
What if we had the power to bring back the dead? As far as recordings are concerned, we might be getting pretty close.
The viral success of the so-called “Fake Drake” track “Heart on My Sleeve,” which apparently employed AI technology to create realistic renderings of vocals from Drake and The Weeknd without their knowledge, has raised the possibility that perhaps any voice can now be imitated by AI, even artists who died decades ago.
Last week, producer Timbaland did just that. “I always wanted to work with BIG, and I never got a chance to. Until today…” he said in an Instagram Reel, pressing play on an unreleased song clip that sounds like Notorious BIG rapping on top of a Timbaland beat, despite the fact that the rapper was murdered in a drive-by shooting more than 25 years prior. (A representative for Timbaland did not respond to Billboard’s request for comment. A representative for Notorious BIG’s estate declined to comment.)
But this is not the first time a deceased star’s voice has been resurrected with AI. The HYBE-owned AI voice synthesis company Supertone recreated the voice of late South Korean folk artist Kim Kwang-seok last year, and in November, Tencent’s Lingyin Engine made headlines for developing “synthetic voices in memory of legendary artists,” like Teresa Teng and Anita Mui. To see even more examples of this technology applied to late American singers, spend a few minutes on TikTok searching phrases like “Kurt Cobain AI cover” or “XXXTentacion AI voice.”
Some artists – like Grimes and Holly Herndon – have embraced the idea of this vocal recreation technology, finding innovative ways to grant fans access to their voices while maintaining some control through their own AI models, but other artists are showing signs that they will resist this, fearing that the technology could lead to confusion over which songs they actually recorded. There is also fear that fans will put words into artists’ mouths, making them voice phrases and opinions that they would never say IRL. Even Grimes admitted on Twitter there is the possibility that people will use her voice to say “rly rly toxic lyrics” or “nazi stuff” – and said she’d take those songs down.
In the case of artists like Notorious BIG or Kurt Cobain, who both passed away when the internet was still something you had to dial up, it’s impossible to know where they might have stood on this next-gen technology. Still, their voices are being resurrected through AI, and it seems these vocals are getting more realistic by the day.
It calls to mind the uncanny valley nature of the Tupac hologram that debuted at Coachella in 2012, or even the proliferation of posthumous albums in recent years, which are especially common from artists who passed away suddenly at a young age, like Juice WRLD, Lil Peep, and Mac Miller.
Tyler, the Creator has voiced what many others have felt about the posthumous album trend. At an April 26 concert in Los Angeles, he noted that he’s written it into his will that he does not want any unreleased music put out after his death. “That’s f-cking gross,” he said. “Like, half-ass ideas and some random feature on it…like no.” It remains unclear if Tyler’s dying wishes would be honored when that time comes, however. Labels often own every song recorded during the term of their contract with an artist, so there is financial incentive for labels to release those unheard records.
Some who look at this optimistically liken the ability to render an artist’s voice onto a cover or original track to an emerging, novel form of fan engagement, similar to remixing, sampling, or even writing fan fiction. Like this new technology, remixes and samples both started as unsanctioned creations. Those reworkings were often less about making songs that would go toe-to-toe with the original artists’ catalogs on the Billboard charts than they were about creativity and playfulness. Of course, plenty of legal issues came along with the emergence of both remixing and sampling.
The legality of bringing artists’ voices back from the grave specifically is also still somewhat unclear. A celebrity’s voice may be covered by “right of publicity” laws which can protect them from having their voices commercially exploited without authorization. However, publicity rights post-mortem can be limited. “There’s no federal rights of publicity statute, just a hodgepodge of different state laws,” says Josh Love, partner at Reed Smith. He explains that depending on where the artist was domiciled at the time of their death, their estate may not possess any rights of publicity, but in states like California, there can be strong protection after death.
Another potential safeguard is the Lanham Act – which prohibits the use of any symbol or device that is likely to deceive consumers about the association, sponsorship, or approval of goods and services — though it may be less of a potent argument post-mortem. But most cases in which rights of publicity or the Lanham Act were used to protect a musician’s voice – like Tom Waits v. Frito Lay and Bette Midler v. Ford – were clear examples of voices being appropriated for commercial use. Creative works, like songs, are much more likely to be deemed a protected form of free speech.
Some believe this could be a particularly interesting new path for the estates and rights holders who control an artist’s likeness to revive older catalogs, especially when the artist is not alive to take part in any more promotion. As Zach Katz, president and COO of FaZe Clan and former president of BMG US, put it in a recent press release for voice mapping service Covers.ai: “AI will open a new, great opportunity for more legacy artists and catalogs to have their ‘Kate Bush’ or ‘Fleetwood Mac’ moment. We are living in a remix culture and the whole fan-music movement is overdue to arrive in the industry.”
Though Covers.ai, created by start-up MAYK, was only just released to the public today, May 10, the company announced that it had already amassed over 100,000 sign-ups for the service leading up to its launch, suggesting a strong appetite for this technology. With Covers.ai, users can upload original songs and map someone else’s voice on top of them, and the company says it is working to partner with the music business to license and pay for these voices. Its co-founder and CEO, Stefan Heinrich, says the idea is especially popular so far with Gen Z and Gen Alpha: “The product we’re building here is really made for the next generation, the one coming up.”
Between Supertone, Lingyin Engine, Covers.ai and other competitors like Uberduck coming into the marketplace, it seems the popularization of these AI voice synthesizers is inevitable (albeit legally uncertain). But playing with dead artists’ voices adds another layer of moral complexity to the discussion: is this more akin to paying respects or to grave robbing?
MAYK’s artificial intelligence-powered voice recreation tool officially launched to all users today (May 10).
Covers.ai lets users upload their own original songs and then try other AI voices on top of them, including the voices of Billboard Hot 100-charting talent. According to a company press release, Covers.ai’s tool topped 100,000 sign-ups prior to its launch.
Its founder and CEO, Stefan Heinrich — an entrepreneur who previously worked in high-ranking positions for Cameo, TikTok, Musical.ly and YouTube — explains that, for now, most of the models available for users to work with are “community models.”
“This is open source,” he explains. “There are users that make these models with various celebrity voices out in the wild, and those can be uploaded and marked as ‘community models’ on our site. At the same time, we are working with artist teams to license the voices of specific talent so we can find a way to compensate them for their official use.”
Eventually, Heinrich says he also hopes to find a way to license song catalogs from rights holders so that users can mix and match tracks with various artists’ voices they find on the site. Through these licensing agreements, he hopes to find a way to create a new revenue stream for talent, but to date, these licenses have not yet been finalized.
MAYK is backed by notable music investors including Zach Katz (president/COO of FaZe Clan, former president of BMG US), Matt Pincus (co-founder and CEO of MUSIC), Jon Vlassopulos (CEO of Napster, former global head of music at Roblox), Mohnish Sani (principal, content acquisition, Amazon Music) and more.
The launch arrives as conversations around AI and vocal deepfakes are at a fever pitch. Just last month, an unknown songwriter called Ghostwriter went viral for creating a song called “Heart on My Sleeve” using supposed AI-renderings of Drake and The Weeknd’s voices without their knowledge. Soon after, Grimes responded to the news by launching her own AI voice model to let users freely use her voice to create music.
In just a few minutes of searching, it’s apparent that TikTok is already flooded with songs with AI-vocals, whether they are original songs employing the voices of famous talent, like “Heart on My Sleeve,” or mashing up one well-known song with the voice of a different artist.
This AI vocal technology raises legal questions, however.
Mimicking vocals may be a violation of an artist’s so-called right of publicity – the legal right to control how one’s individual identity is commercially exploited by others. Past landmark cases — like Tom Waits v. Frito Lay and Bette Midler v. Ford Motor Company — have established that soundalikes of famous voices cannot be employed without consent to sell products, but the precedent is less clear when it comes to creative expressions like songs, which are much more likely to be deemed a protected form of free speech.
Heinrich hopes that Covers.ai can help “democratize creativity” and make it far more “playful” in an effort to move music fans from lean-back forms of music discovery, like listening to radio or a pre-programmed editorial playlist, to a more engaged, interactive experience. “I think music is really changing right now,” he says, noting that Covers.ai’s earliest adopters are mostly Gen Z and Gen Alpha. “The product we’re building here is really made for the next generation, the one coming up.”
As the music industry grapples with the far-reaching implications of artificial intelligence, Warner Music Group CEO Robert Kyncl is being mindful of the opportunities it will create. “Framing it only as a threat is inaccurate,” he said on Tuesday (May 9) during the earnings call for the company’s second fiscal quarter ended March 31.
Kyncl’s tenure as chief business officer at YouTube informs his viewpoint on AI’s potential to contribute to the music industry’s growth. “When I arrived [at YouTube] in 2010, we were fighting many lawsuits around the world and were generating low tens of millions of dollars from [user-generated content],” he continued. “We turned that liability into a billion-dollar opportunity in a handful of years and multibillion-dollar revenue stream over time. In 2022, YouTube announced that it paid out over $2 billion from UGC to music rightsholders alone and far more across all content industries.”
Not that AI doesn’t pose challenges for owners of intellectual property. A wave of high-profile AI-generated songs — such as the “fake Drake”/The Weeknd track, “Heart on My Sleeve,” by an anonymous producer under the name Ghostwriter — has revealed how off-the-shelf generative AI technologies can easily replicate the sound and style of popular artists without their consent.
“Our first priority is to vigorously enforce our copyrights and our rights in name, image, likeness, and voice, to defend the originality of our artists and songwriters,” said Kyncl, echoing comments by Universal Music Group CEO Lucian Grainge in a letter sent to Spotify and other music streaming platforms in March. In that letter, Grainge said UMG “would not hesitate to take steps to protect our rights and those of our artists” against AI companies that use its intellectual property to “train” their AI.
“It is crucial that any AI generative platform discloses what their AI is trained on and this must happen all around the world,” Kyncl said on Tuesday. He pointed to the EU Artificial Intelligence Act — a proposed law that would establish government oversight and transparency requirements for AI systems — and efforts by U.S. Sen. Chuck Schumer in April to build “a flexible and resilient AI policy framework” to impose guardrails while allowing for innovation.
“I can promise you that whenever and wherever there is a legislative initiative on AI, we will be there in force to ensure that protection of intellectual property is high on the agenda,” Kyncl continued.
Kyncl went on to note that technological problems also require technological solutions. AI companies and distribution platforms can manage the proliferation of AI music by building new technologies for “identifying and tracking of content on consumption platforms that can appropriately identify copyright and remunerate copyright holders,” he continued.
Again, Kyncl’s employment at YouTube comes into play here. Prior to his arrival, the platform built a proprietary digital fingerprinting system, Content ID, to manage and monetize copyrighted material. In fact, one of Kyncl’s first hires as CEO of WMG, president of technology Ariel Bardin, is a former YouTube VP of product management who oversaw Content ID.
Labels are also attempting to rein in AI content by adopting “user-centric” royalty payment models that reward authentic, human-created recordings over mass-produced imitations. During UMG’s first quarter earnings call on April 26, Grainge said that “with the right incentive structures in place, platforms can focus on rewarding and enhancing the artist-fan relationship and, at the same time, elevate the user experience on their platforms, by reducing the sea of noise … eliminating unauthorized, unwanted and infringing content entirely.” WMG adopted user-centric (i.e. “fan-powered”) royalties on SoundCloud in 2022.
HipHopWired Featured Video
Source: Christopher Furlong / Getty / Twitter
If you haven’t used your Twitter account in a while and plan on keeping it, you better log on now because Elon Musk wants to get rid of it.
Elon Musk took to his janky Twitter app to warn his followers and other users that “You will probably see follower count drop,” revealing that the platform will begin “purging” accounts that have “had no activity at all for several years.”
The announcement didn’t reveal an exact date for this purge, but you can bet one is on the way from Twitter in the form of a blog post.
Musk’s tweet about inactive accounts comes after he reportedly threatened to reassign NPR’s Twitter handle. The news outlet ditched Twitter for other social media platforms after the company labeled NPR “state-affiliated media,” putting it in the same category as Russia’s RT.
Per Engadget’s reporting, Musk told NPR in an email exchange that it’s Twitter’s policy to “recycle handles that are definitively dormant,” and the “same policy applies to all accounts.”
Elon Musk Says His Company Will Archive Abandoned Accounts
Responding to his tweet, a paid subscriber to Musk’s profile “strongly” urged Musk not to purge inactive accounts. “Deleting the output of inactive accounts would be terrible. I still see people liking ten year old tweets I made, but the threads are already often fragmented with deleted or unavailable tweets. Don’t make it worse!”
Musk replied to the user by revealing his platform would archive the “abandoned” accounts.
Currently, the platform’s policy page on inactive accounts tells users to “log in at least every 30 days” and warns that accounts may be removed due to prolonged inactivity.
We shall see if Musk keeps his word on this policy update.
—
Photo: Christopher Furlong / Getty
Source: Jerod Harris / Getty / Snoop Dogg
Snoop Dogg wants to know, “Where the f*ck is the money?”
The WGA is currently in the streets demanding better pay and other assurances from Hollywood, and Snoop Dogg wants the same for music artists.
Speaking on a panel earlier this week alongside his business partner, former Apple Music executive and Gamma founder Larry Jackson, plus Variety‘s Shirley Halperin, the West Coast rapper urged artists to boycott streaming services for being stingy with the coins, saying they can take a lesson from the writers to make that happen.
“[Artists] need to figure it out the same way the writers are figuring it out,” the “Gin & Juice” crafter said. “The writers are striking because [of] streaming, they can’t get paid. Because when it’s on the platform, it’s not like in the box office.”
He continued, “I don’t understand how the fuck you get paid off of that shit. Somebody explain to me how you can get a billion streams and not get a million dollars?… That’s the main gripe with a lot of us artists is that we do major numbers… But it don’t add up to the money. Like where the fuck is the money?”
YouTube also caught a stray from Snoop after Jackson spoke about Gamma only receiving $15,000 in payout money from 500 million YouTube Shorts streams. “YouTube, y’all motherf*ckers need to break bread or fake dead!” he added.
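For a rough sense of the math behind those complaints, here’s a back-of-the-envelope sketch using only the figures quoted in this story. The helper function is purely illustrative; actual per-stream rates vary widely by platform, territory, and deal.

```python
# Back-of-the-envelope payout math using the numbers quoted above.
# These are the article's figures, not official platform rates.

def per_stream_rate(total_payout: float, streams: int) -> float:
    """Average payout per stream in dollars."""
    return total_payout / streams

# Gamma's reported YouTube Shorts payout: $15,000 for 500 million streams.
shorts_rate = per_stream_rate(15_000, 500_000_000)
print(f"${shorts_rate:.5f} per stream")  # $0.00003 per stream

# At that same rate, the "billion streams" Snoop mentions would gross:
billion_stream_gross = shorts_rate * 1_000_000_000
print(f"${billion_stream_gross:,.0f}")  # $30,000
```

At that rate, a billion streams would indeed fall far short of the million dollars Snoop is asking about, which is the gap his gripe is pointing at.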
Snoop Dogg’s Beef With Music Streaming Is Nothing New
With the help of Jackson’s Gamma, with whom he has a long-term deal, Snoop Dogg brought the iconic record label’s music catalog back to streaming services.
Gamma also quietly helped get Death Row’s music on TikTok in February. That move came a year after Uncle Snoop acquired Death Row Records, which is why he has been handing out Death Row chains like they’re Halloween candy. He had also pulled the label’s music off streaming services because he was not feeling the artist payout situation.
During a Drink Champs episode last year, he didn’t bite his tongue about streaming services being cheap with the dough.
“First thing I did was snatch all the music off those platforms traditionally known to people, because those platforms don’t pay,” he told the Drink Champs crew. “And those platforms get millions of streams, and nobody gets paid other than the record labels.”
“So what I wanted to do is snatch my music off, create a platform similar to Amazon, Netflix, Hulu. It’ll be a Death Row app, and the music, in the meantime, will live in the metaverse.”
We don’t know if that metaverse or app will hit, but more power to the Doggfather.
—
Photo: Jerod Harris / Getty
Source: Activision / Call of Duty / Kevin Durant
Kevin Durant calls himself Easy Money Sniper, and now we will see if he can live up to that in Call of Duty as the latest operator.
The writing was literally on the wall, and Call of Duty players saw it coming. Fans of the franchise figured it all out when they spotted Easy Money Sniper, the Phoenix Suns All-Star’s Instagram handle, written on one of the game’s buildings, hinting he would be coming to the game.
Related Stories
That was confirmed when Call of Duty announced that the professional hooper, business mogul, and avid gamer would be the next available skin.
“@easymoneysniper is drafted to Call of Duty for his rookie season,” Call of Duty writes in the caption of the IG post. “Kevin Durant will be available in a special, limited-time Store Bundle to be released during Season 03 Reloaded.”
Immediately after the announcement, the one question COD players had on their minds was whether Durant would appear in all of his 6’11” glory when he becomes available, making him a very obvious target in the game, or be shrunk down to the size of the other operators.
Durant’s addition to the game follows the operator skin for Shredder, the Teenage Mutant Ninja Turtles villain.
We can’t wait to see how Easy Money Sniper looks and feels in the game. We also wonder how good Durant is in Call of Duty. He didn’t make a good impression in the NBA 2K Players Only Tournament.
On the other hand, his teammate, Devin Booker, is an excellent COD player, so he might be jealous that KD got an operator skin before him.
We are sure Booker will eventually get into the game too.
—
Photo: Activision