
In 1994, at the dawn of the internet era, Rolling Stone asked Steve Jobs if he still had faith in technology. “It’s not a faith in technology,” he responded. “It’s faith in people.”

Today, at the dawn of the artificial intelligence era, we put our faith in people too.

It’s hard to think of an issue that has exploded onto the public scene with the furor of the debate over AI, which went from obscure technology journals to national morning shows practically overnight. This week, Congress is convening the first two of what will surely be many hearings on the issue, including one with OpenAI CEO Sam Altman and another with musician, voice actor and SAG-AFTRA National Board member Dan Navarro.

As members of the global Human Artistry Campaign, made up of more than 100 organizations that represent a united, worldwide coalition of the creative arts, we welcome this open and active debate. It’s gratifying to see policymakers, industry, and our own creative community asking tough questions up front. It’s a lot easier to chart a course in advance than to play catch-up afterward.

We don’t have long to get this right, either. The internet is already awash in unlicensed and unethical “style” and “soundalike” tools that rip off the writing, voice, likeness and style of professional artists and songwriters without authorization or permission. Powerful new engines like OpenAI’s ChatGPT and Jukebox, Google’s MusicLM and Microsoft’s AI-powered Bing have been trained on vast troves of musical compositions, lyrics, and sound recordings — as well as every other type of data and information available on the internet — without even the most basic transparency or disclosure, let alone consent from the creators whose work is being used. Songwriters, recording artists, and musicians today are literally being forced to compete against AI programs trained on copies of their own compositions and recordings.

RIAA Chairman/CEO Mitch Glazier (Photo: Othello Banaci)

We strongly support AI that can be used to enhance art and stretch the potential of human creativity even further. Technology has always pushed art forward, and AI will be no different.

At the same time, however, human artistry must and will always remain at the core of genuine creation. The basis of creative expression is the sharing of lived experiences — an artist-to-audience/audience-to-artist connection that forms our culture and identity.

Without a rich supply of human-created works, there would be nothing on which to train AI in the first place. And if we don’t lay down a policy foundation now that respects, values and compensates the unique genius of human creators, we will end up in a cultural cul-de-sac, feeding AI-generated works back into the engines that produced them in a costly and ultimately empty race to the artistic bottom.

That policy foundation must start with the core value of consent. Use of copyrighted works to train or develop AI must be subject to free-market licensing and authorization from all rights holders. Creators and copyright owners must retain exclusive control over the ways their work is used. The moral invasion of AI engines that steal the core of a professional performer’s identity — the product of a lifetime’s hard work and dedication — without permission or pay cannot be tolerated.

David Israelite (Photo courtesy of NMPA)

This will require AI developers to ensure copyrighted training inputs are approved and licensed, including those used by pre-trained AIs they employ. It means they need to keep thorough and transparent records of the creative works and likenesses used to train AI systems and how they were exploited. These obligations are nothing new, though — anyone who uses another creator’s work or a professional’s voice, image or likeness must already ensure they have the necessary rights and maintain the records to prove it.

Congress is right to bring in AI developers like Sam Altman to hear the technology community’s vision for the future of AI and explore the safeguards and guardrails the industry is relying on today. The issues around the rapid deployment of novel AI capabilities are numerous and profound: data privacy, deepfakes, bias and misinformation in training sets, job displacement and national security.

Creators will be watching and listening closely for concrete, meaningful commitments to the core principles of permission and fair market licensing that are necessary to sustain songwriters and recording artists and drive innovation.

We have already seen some of what AI can do. Now it falls to us to insist that it be done in ethical and lawful ways. Nothing short of our culture — and, over time, our very humanity — is at stake.

David Israelite is the President & CEO of the National Music Publishers’ Association. NMPA is the trade association representing American music publishers and their songwriting partners.

Mitch Glazier is chairman/CEO of the RIAA, the trade organization that supports and promotes the creative and financial vitality of the major recorded-music companies.

As a musician, educator, and author, I’ve spent the last few years examining AI’s challenges to the music ecosystem. But recently, after a comical misunderstanding on a U.S. podcast, I ended up playing devil’s advocate for the AI side of the AI-and-music equation. The experience was thought-provoking as I took on the role of an accidental AI evangelist, and I started to refocus on the questions: “Why are we fighting for ethical use of AI in music in the first place? What are the benefits, and are they worth the time and effort?”

As we hurtle from the now-quaint AI chatbot ChatGPT, to the expected text-to-video and text-to-music capabilities of GPT-5 (rumoured to drop in December), to Microsoft’s acknowledgment that AGI is feasible (artificial general intelligence, or a sentient AI being, or Skynet to be alarmist), to viral AI-generated hits with vocals in the style of Drake & The Weeknd and Bad Bunny & Rihanna, it can be easy to focus on the doom and gloom of AI. However, doing so does us a disservice, as it shifts the conversation from “how do we harness AI’s benefits ethically?” to “how do we stop AI from destroying the world?” There are many, with a cheeky chappie nod to Ian Dury, “Reasons to Be Cheerful” (or at least not fearful) about music and AI. Here are nine reasons why we need to embrace it with guardrails, rather than throwing the baby out with the bathwater.

Fun: Yes, damn the ethics – temporarily. Generative AI technologies, which ingest content and then create new content based on those inputs, are incredibly fun. They tease and capture our curiosity, drawing us in. We might tell our employers that we use text-to-image services like DALL-E and Stable Diffusion or chat and search bots like ChatGPT and Jasper to optimise workflow to stay ahead of the technological curve, but they are also so seductively entertaining. Elementary AI prohibition won’t work; our solutions must be at least as stimulating as our challenges. 

Time-saving: According to a survey by Water & Music, this is what music makers desire most. Many musicians spend countless hours managing social media accounts, wishing they could focus on music-making instead. AI solutions promise to grant that wish, allowing them to auto-generate text for social posts and announcements, and providing inspiration and potential starting points for new tracks, giving them the gift of time and helping them write, record, and release music more quickly. Let’s use automation wisely to free up musicians for their art and their economics.

Education: Despite dwindling funds for music education, technology offers new ways to make music accessible to everyone. Affordable AI tools can help students break through privileged barriers, providing access to personalised learning. I work with Ableton, which makes a variety of music production hardware and software. Successful initiatives such as the Ableton Push 1 campaign, which offered discounts to owners who traded in their Push 1 MIDI controllers, then refurbished those units and donated them to schools in need, demonstrate how digital tools can empower the economically marginalised, enable them to explore new musical styles and techniques, and nurture their passion for music.

Imperfect charm: AI’s imperfections and quirks make it endearing and relatable. AI’s unpredictable nature can lead to happy accidents and to repurposing technologies for new musical uses. The fact that LLMs (large language models), which analyze huge swaths of text and then generate new text based on the patterns they learn, can be flawed retains a touch of human magic and humour in our interactions with them. Let’s enjoy this fleeting VHS fuzziness before it’s gone.

Affordable: Setting aside the environmental costs for a moment, AI has become exponentially more accessible. AI allows creators to produce incredible results with basic tools. Last July, I purchased an expensive GPU-filled MacBook with dreams of making mind-blowing AI musical creations, but by September, I was doing just that using only my old phone’s browser. This so-called “democratisation” of music production can level the playing field for musicians worldwide, allowing more people to pursue their passion. Can we get it to increase their income too?

Tech Stacking: Experimenting with new combinations of generative AI APIs (application programming interfaces) opens up a world of DIY creativity. APIs are essentially pre-made functionality that developers can slot into their code easily, allowing them to focus on innovation rather than spending their time creating generative AI applications from scratch. This collision of technologies can encourage collaboration between musicians and developers, fostering a dynamic and innovative environment that crucially must be aligned with new licensing and rights payment models. 

Elevated Chatter: As AI becomes more prevalent, the quality of conversations surrounding it has improved. People are starting to debate the legality of using copyrighted material to train AI, particularly in the music world, with a variety of strong arguments being made on behalf of human creators. In my research, I tried to address the complexities of AI in the music ecosystem, and now, I find these discussions happening everywhere, from John Oliver to barber shops. This elevated discourse can help us as a global industry make informed and necessarily swift decisions about AI’s role in our lives, better integrating its reasonably cheerful benefits and not being overwhelmed by its many chilling implications.

Inspiring the next generation: Introducing AI to young minds can be both inspiring and terrifying. In my undergraduate module at Windmill Studios Dublin, I tasked students with inventing new music IP using existing cutting-edge technologies, with one ground rule: they could use a hologram but not bring someone back from the dead. Initially, I felt terrible about presenting such a potentially dystopian vision to young minds. But what happened next amazed me: all 40-odd students (from two classes) came up with outstanding commercial ideas. Their creativity and enthusiasm reminded me that the adage “no one knows anything” holds as true for music as it ever did.

Time to adapt: Perhaps the biggest reason to be cheerful at the moment, we still have enough time to address AI’s challenges. As Microsoft announces the “first sparks of AGI” and we face a saturated streaming market, we must work quickly together to ensure an equitable future for music makers. In my book, “Artificial Intelligence and Music Ecosystem,” my fellow contributors and I delve into the pressing issues surrounding AI, and now, more than ever, we need to take action to steer the course of music’s evolution. As AI continues to develop, it’s crucial for musicians, industry professionals, and policymakers to engage in open dialogue, collaborating to create a sustainable and equitable music ecosystem. Otherwise, what looks like Drake and sounds like Drake may not actually be Drake in the future.

The Human Artistry Campaign is the first step in this direction and interlinks with the concerns of MIT professor Max Tegmark’s “Pause Giant AI Experiments” open letter (which currently has 27,000+ signatures and calls for a pause in the development of AI to give governments the chance to regulate it) for a big-vision picture. If we work together, we can ensure that AI serves as a tool for artistic growth and sustainability. But we must move fast and nurture things carefully, as AI’s growth is exponential. Do you remember when ChatGPT was on the cutting edge of AI? Well, that was only five months ago, and we’re now talking about the possibility of sentience. That’s why I am working with some of the biggest companies in music and big tech to create AI:OK, a system to identify ethical stakeholders and help create an equitable AI music ecosystem. If you are human and would like to help shape this future, it might be the most significant reason to be cheerful of all.

Dr. Martin Clancy, PhD is a musician, author, artist manager and AI expert. He is a founding member of Irish rock band In Tua Nua, chairs the IEEE Global AI Ethics Arts Committee, serves as an IRC Research Fellow at Trinity College Dublin, and authored the book Artificial Intelligence and Music Ecosystem. He also manages Irish singer Jack Lukeman and is a leading voice in advocating ethical use of AI in music.

What if we had the power to bring back the dead? As far as recordings are concerned, we might be getting pretty close.

The viral success of the so-called “Fake Drake” track “Heart on My Sleeve,” which apparently employed AI technology to create realistic renderings of vocals from Drake and The Weeknd without their knowledge, has raised the possibility that any voice can now be imitated by AI, even the voices of artists who died decades ago.

Last week, producer Timbaland did just that. “I always wanted to work with BIG, and I never got a chance to. Until today…” he said in an Instagram Reel, pressing play on an unreleased song clip that sounds like the Notorious B.I.G. rapping over a Timbaland beat, despite the fact that the rapper was murdered in a drive-by shooting more than 25 years earlier. (A representative for Timbaland did not respond to Billboard’s request for comment. A representative for the Notorious B.I.G.’s estate declined to comment.)

But this is not the first time a deceased star’s voice has been resurrected with AI. The HYBE-owned AI voice synthesis company Supertone recreated the voice of late South Korean folk artist Kim Kwang-seok last year, and in November, Tencent’s Lingyin Engine made headlines for developing “synthetic voices in memory of legendary artists,” like Teresa Teng and Anita Mui. To see even more examples of this technology applied to late American singers, spend a few minutes on TikTok searching phrases like “Kurt Cobain AI cover” or “XXXTentacion AI voice.”

Some artists – like Grimes and Holly Herndon – have embraced the idea of this vocal recreation technology, finding innovative ways to grant fans access to their voices while maintaining some control through their own AI models, but other artists are showing signs that they will resist this, fearing that the technology could lead to confusion over which songs they actually recorded. There is also fear that fans will put words into artists’ mouths, making them voice phrases and opinions that they would never say IRL. Even Grimes admitted on Twitter there is the possibility that people will use her voice to say “rly rly toxic lyrics” or “nazi stuff” – and said she’d take those songs down.

In the case of artists like the Notorious B.I.G. or Kurt Cobain, who both died when the internet was still something you had to dial up, it’s impossible to know where they might stand on this next-gen technology. Still, their voices are being resurrected through AI, and it seems these vocals are getting more realistic by the day.

It calls to mind the uncanny valley nature of the Tupac hologram that debuted at Coachella in 2012, or even the proliferation of posthumous albums in more recent years, which are especially common from artists who died suddenly at a young age, like Juice WRLD, Lil Peep, and Mac Miller.

Tyler, the Creator has voiced what many others have felt about the posthumous album trend. At an April 26 concert in Los Angeles, he noted that he’s written it into his will that he does not want any unreleased music put out after his death. “That’s f-cking gross,” he said. “Like, half-ass ideas and some random feature on it…like no.” It remains unclear if Tyler’s dying wishes would be honored when that time comes, however. Labels often own every song recorded during the term of their contract with an artist, so there is financial incentive for labels to release those unheard records.

Some who look at this optimistically liken the ability to render an artist’s voice onto a cover or original track to an emerging, novel form of fan engagement, similar to remixing, sampling, or even writing fan fiction. Like this new technology, remixes and samples both started as unsanctioned creations. Those reworkings were often less about making songs that would go toe-to-toe with the original artists’ catalogs on the Billboard charts than about creativity and playfulness. Of course, plenty of legal issues came along with the emergence of both remixing and sampling.

The legality of bringing artists’ voices back from the grave specifically is also still somewhat unclear. A celebrity’s voice may be covered by “right of publicity” laws which can protect them from having their voices commercially exploited without authorization. However, publicity rights post-mortem can be limited. “There’s no federal rights of publicity statute, just a hodgepodge of different state laws,” says Josh Love, partner at Reed Smith. He explains that depending on where the artist was domiciled at the time of their death, their estate may not possess any rights of publicity, but in states like California, there can be strong protection after death.

Another potential safeguard is the Lanham Act – which prohibits the use of any symbol or device that is likely to deceive consumers about the association, sponsorship, or approval of goods and services – though it may be a less potent argument post-mortem. But most cases in which rights of publicity or the Lanham Act were used to protect a musician’s voice – like Tom Waits v. Frito-Lay and Bette Midler v. Ford – were clear examples of voices being appropriated for commercial use. Creative works, like songs, are much more likely to be deemed a protected form of free speech.

Some believe this could be a particularly interesting new path for the estates and rights holders who control an artist’s likeness to revive older catalogs, especially when the artist is not alive to take part in any more promotion. As Zach Katz, president and COO of FaZe Clan and former president of BMG US, put it in a recent press release for voice mapping service Covers.ai: “AI will open a new, great opportunity for more legacy artists and catalogs to have their ‘Kate Bush’ or ‘Fleetwood Mac’ moment. We are living in a remix culture and the whole fan-music movement is overdue to arrive in the industry.”

Though Covers.ai, created by start-up MAYK, was only just released to the public today, May 10, the company announced that it had already amassed over 100,000 sign-ups for the service leading up to its launch, suggesting a strong appetite for this technology. With Covers.ai, users can upload original songs and map someone else’s voice on top of them, and the company says it is working to partner with the music business to license and pay for these voices. Its co-founder and CEO, Stefan Heinrich, says the idea is especially popular so far with Gen Z and Gen Alpha: “The product we’re building here is really made for the next generation, the one coming up.”

Between Supertone, Lingyin Engine, Covers.ai, and other competitors like Uberduck coming into the marketplace, it seems the popularization of these AI voice synthesizers is inevitable (albeit legally uncertain). But playing with dead artists’ voices adds another layer of moral complexity to the discussion: is this more akin to paying respects or to grave robbing?

MAYK’s artificial intelligence-powered voice recreation tool officially launched to all users today (May 10).

Covers.ai lets users upload their own original songs and then try other AI voices on top of them, including the voices of Billboard Hot 100-charting talent. According to a company press release, Covers.ai topped 100,000 sign-ups prior to its launch.

Its founder and CEO, Stefan Heinrich — an entrepreneur who previously worked in high-ranking positions for Cameo, TikTok, Musical.ly and YouTube — explains that, for now, most of the models available for users to work with are “community models.”

“This is open source,” he explains. “There are users that make these models with various celebrity voices out in the wild, and those can be uploaded and marked as ‘community models’ on our site. At the same time, we are working with artist teams to license the voices of specific talent so we can find a way to compensate them for their official use.”

Eventually, Heinrich says he also hopes to find a way to license song catalogs from rights holders so that users can mix and match tracks with various artists’ voices they find on the site. Through these licensing agreements, he hopes to find a way to create a new revenue stream for talent, but to date, these licenses have not yet been finalized.

MAYK is backed by notable music investors including Zach Katz (president/COO of FaZe Clan, former president of BMG US), Matt Pincus (co-founder and CEO of MUSIC), Jon Vlassopulos (CEO of Napster, former global head of music at Roblox), Mohnish Sani (principal, content acquisition, Amazon Music) and more.

The launch arrives as conversations around AI and vocal deepfakes are at a fever pitch. Just last month, an unknown songwriter called Ghostwriter went viral for creating a song called “Heart on My Sleeve” using supposed AI-renderings of Drake and The Weeknd’s voices without their knowledge. Soon after, Grimes responded to the news by launching her own AI voice model to let users freely use her voice to create music.

In just a few minutes of searching, it’s apparent that TikTok is already flooded with songs featuring AI vocals, whether they are original songs employing the voices of famous talent, like “Heart on My Sleeve,” or mashups of one well-known song with the voice of a different artist.

This AI vocal technology raises legal questions, however.

Mimicking vocals may violate an artist’s so-called right of publicity – the legal right to control how your individual identity is commercially exploited by others. Past landmark cases — like Tom Waits v. Frito-Lay and Bette Midler v. Ford Motor Company — have established that soundalikes of famous voices cannot be employed without consent to sell products, but the precedent is less clear when it comes to creative expressions like songs, which are much more likely to be deemed a protected form of free speech.

Heinrich hopes that Covers.ai can help “democratize creativity” and make it far more “playful” in an effort to move music fans from lean-back forms of music discovery, like listening to radio or a pre-programmed editorial playlist, to a more engaged, interactive experience. “I think music is really changing right now,” he says, noting that Covers.ai’s earliest adopters are mostly Gen Z and Gen Alpha. “The product we’re building here is really made for the next generation, the one coming up.”

As the music industry grapples with the far-reaching implications of artificial intelligence, Warner Music Group CEO Robert Kyncl is being mindful of the opportunities it will create. “Framing it only as a threat is inaccurate,” he said on Tuesday (May 9) during the earnings call for the company’s second fiscal quarter ended March 31.

Kyncl’s tenure as chief business officer at YouTube informs his viewpoint on AI’s potential to contribute to the music industry’s growth. “When I arrived [at YouTube] in 2010, we were fighting many lawsuits around the world and were generating low tens of millions of dollars from [user-generated content],” he continued. “We turned that liability into a billion-dollar opportunity in a handful of years and multibillion-dollar revenue stream over time. In 2022, YouTube announced that it paid out over $2 billion from UGC to music rightsholders alone and far more across all content industries.”

Not that AI doesn’t pose challenges for owners of intellectual property. A wave of high-profile AI-generated songs — such as the “fake Drake”/The Weeknd track, “Heart on My Sleeve,” by an anonymous producer under the name Ghostwriter — has revealed how off-the-shelf generative AI technologies can easily replicate the sound and style of popular artists without their consent.

“Our first priority is to vigorously enforce our copyrights and our rights in name, image, likeness, and voice, to defend the originality of our artists and songwriters,” said Kyncl, echoing comments by Universal Music Group CEO Lucian Grainge in a letter sent to Spotify and other music streaming platforms in March. In that letter, Grainge said UMG “would not hesitate to take steps to protect our rights and those of our artists” against AI companies that use its intellectual property to “train” their AI.

“It is crucial that any AI generative platform discloses what their AI is trained on and this must happen all around the world,” Kyncl said on Tuesday. He pointed to the EU Artificial Intelligence Act — a proposed law that would establish government oversight and transparency requirements for AI systems — and efforts by U.S. Sen. Chuck Schumer in April to build “a flexible and resilient AI policy framework” to impose guardrails while allowing for innovation.

“I can promise you that whenever and wherever there is a legislative initiative on AI, we will be there in force to ensure that protection of intellectual property is high on the agenda,” Kyncl continued.

Kyncl went on to note that technological problems also require technological solutions. AI companies and distribution platforms can manage the proliferation of AI music by building new technologies for “identifying and tracking of content on consumption platforms that can appropriately identify copyright and remunerate copyright holders,” he continued.

Again, Kyncl’s employment at YouTube comes into play here. Prior to his arrival, the platform built a proprietary digital fingerprinting system, Content ID, to manage and monetize copyrighted material. In fact, one of Kyncl’s first hires as CEO of WMG, president of technology Ariel Bardin, is a former YouTube vp of product management who oversaw Content ID.

Labels are also attempting to rein in AI content by adopting “user-centric” royalty payment models that reward authentic, human-created recordings over mass-produced imitations. During UMG’s first quarter earnings call on April 26, Grainge said that “with the right incentive structures in place, platforms can focus on rewarding and enhancing the artist-fan relationship and, at the same time, elevate the user experience on their platforms, by reducing the sea of noise … eliminating unauthorized, unwanted and infringing content entirely.” WMG adopted user-centric (i.e. “fan-powered”) royalties on SoundCloud in 2022.

Britain’s competition watchdog said Thursday that it’s opening a review of the artificial intelligence market, focusing on the technology underpinning chatbots like ChatGPT.

The Competition and Markets Authority said it will look into the opportunities and risks of AI as well as the competition rules and consumer protections that may be needed.

AI’s ability to mimic human behavior has dazzled users but also drawn attention from regulators and experts around the world concerned about its dangers as its use mushrooms — affecting jobs, copyright, education, privacy and many other parts of life.

The CEOs of Google, Microsoft and ChatGPT-maker OpenAI will meet Thursday with U.S. Vice President Kamala Harris for talks on how to ease the risks of their technology. And European Union negotiators are putting the finishing touches on sweeping new AI rules.

The U.K. watchdog said the goal of the review is to help guide the development of AI to ensure open and competitive markets that don’t end up being unfairly dominated by a few big players.

Artificial intelligence “has the potential to transform the way businesses compete as well as drive substantial economic growth,” CMA Chief Executive Sarah Cardell said. “It’s crucial that the potential benefits of this transformative technology are readily accessible to U.K. businesses and consumers while people remain protected from issues like false or misleading information.”

The authority will examine competition and barriers to entry in the development of foundation models. Also known as large language models, they’re a sub-category of general purpose AI that includes systems like ChatGPT.

The algorithms these models use are trained on vast pools of online information like blog posts and digital books to generate text and images that resemble human work, but they still face limitations including a tendency to fabricate information.

Right now, our artificial intelligence future sure seems to look a lot like… Wes Anderson movies! Over the past week, various AI programs have used the director’s quirky style to frame TikTok posts, rethink the looks of movies and even, more recently, make a trailer for a fictitious reboot of Star Wars. The future may be creepy, but at least it looks color-saturated and carefully composed.

The fake, fan-made Star Wars trailer, appropriately subtitled “The Galactic Menagerie,” is great fun, and its viral success shows both the strengths and current limitations of AI technology. Anderson’s distinctive visual style is an important part of his art, and the ostensible mission to “steal the Emperor’s artifact” sounds straight out of Star Wars. But the original Star Wars captured the imaginations of so many fans because it suggested a future that had some sand in its gears – the interstellar battle station had a trash compactor, and the spaceport cantina had a live band (and, one assumes, a public performance license).

Right now, at least, AI can’t seem to get past the surface.

“Heart on My Sleeve,” the so-called “Fake Drake” track apparently made with an artificial intelligence-generated version of Drake’s vocals, also sounds perfectly polished, precisely in tune and on tempo. So do most modern pop songs, which tend to be pitch-corrected and sonically tweaked. (Most modern pop isn’t recorded live in a studio so much as assembled on a computer, so why shouldn’t it sound that way?) It’s hard to tell exactly why this style became so popular – the ease of smoothing over mistakes, the temptation of technical perfection, the sheer availability of samples and beats – but it’s what the mass streaming audience seems to want.

It’s also the kind of music that AI can most easily imitate. AI can already create pitch-perfect vocals, right-on-the-beat drumming, the kind of airless perfection of the Wes Anderson Star Wars trailer. It’s harder to learn a particular creator’s style – the phrasing and delivery that set singers apart as much as their voices do. So far, many of the songs online that feature AI-generated voices seem to layer the new voice on top of the original singer’s words, although most pop music is less about technical excellence than about style of delivery. And quirks of timing and emphasis are even harder to imitate.

Most big new pop stars are short on quirks, but they might do well to develop them. Whatever laws and agreements eventually regulate AI – and it pains me to point out that the key word there is eventually – artists will still end up competing with algorithms. And since algorithms don’t need to eat or sleep, creators are going to have to do something that they can’t. One of those things, at least for now, is embracing a certain amount of imperfection. Computers will catch up, of course – if they can avoid mistakes, they can certainly learn to make a few – but that could take some time.

Until relatively recently, most great artists had quirks: Led Zeppelin drummer John Bonham played a bit behind the beat, Snoop Dogg started drawling out verses at a time when most rappers fired them off, and Willie Nelson has a sense of phrasing that owes more to jazz than rock. (Nelson’s timing is going to be hard for algorithms to imitate until they start smoking weed.) In most cases, these quirks are strengths – Bonham’s drumming made Zeppelin swing. But many producers came to see these kinds of imperfections as relics of an age when correcting them was difficult, and the sound of pop has changed so much that they now stick out like sore thumbs.

I don’t mean to romanticize the past. And newer artists have quirks, too – they just tend to smooth them over with studio software. But this kind of artificial perfection is easier to imitate. So, I wonder if the rise of AI – not the parodies we’re seeing so far, but the flood of computer-created pop that’s coming – will push musicians to embrace a rougher, messier aesthetic.

Most artists wouldn’t admit to this, of course – acknowledging commercial pressure is usually considered uncool. But big-picture shifts in the market have always shaped the sound of pop music. Consider how many artists created 35-to-45-minute albums in the ’60s and ’70s, and then 60-to-75-minute albums in the ’90s. Were they almost twice as inspired, or did the amount of music that fit on a CD – and the additional mechanical royalties they could make if they had songwriting credit – drive them to create more? These days, presumably also for economic reasons, songs are getting shorter and albums are getting longer.

It will be interesting to see if they get a bit rougher, too. In Star Wars, at least, the future isn’t all about a sparkling surface.

For the Record is a regular column from deputy editorial director Robert Levine analyzing news and trends in the music industry. Find more here.

Sounding alarms about artificial intelligence has become a popular pastime in the ChatGPT era, taken up by high-profile figures as varied as industrialist Elon Musk, leftist intellectual Noam Chomsky and the 99-year-old retired statesman Henry Kissinger.

But it’s the concerns of insiders in the AI research community that are attracting particular attention. Geoffrey Hinton, a pioneering researcher and the so-called “Godfather of AI,” quit his role at Google so he could speak more freely about the dangers of the technology he helped create.

Over his decades-long career, Hinton’s pioneering work on deep learning and neural networks helped lay the foundation for much of the AI technology we see today.

There has been a spasm of AI introductions in recent months. San Francisco-based startup OpenAI, the Microsoft-backed company behind ChatGPT, rolled out its latest artificial intelligence model, GPT-4, in March. Other tech giants have invested in competing tools — including Google’s “Bard.”

Some of the dangers of AI chatbots are “quite scary,” Hinton told the BBC. “Right now, they’re not more intelligent than us, as far as I can tell. But I think they soon may be.”

In an interview with MIT Technology Review, Hinton also pointed to “bad actors” that may use AI in ways that could have detrimental impacts on society — such as manipulating elections or instigating violence.

Hinton, 75, says he retired from Google so that he could speak openly about the potential risks as someone who no longer works for the tech giant.

“I want to talk about AI safety issues without having to worry about how it interacts with Google’s business,” he told MIT Technology Review. “As long as I’m paid by Google, I can’t do that.”

Since announcing his departure, Hinton has maintained that Google has “acted very responsibly” regarding AI. He told MIT Technology Review that there’s also “a lot of good things about Google” that he would want to talk about — but those comments would be “much more credible if I’m not at Google anymore.”

Google confirmed that Hinton had retired from his role after 10 years overseeing the Google Research team in Toronto.

Hinton declined further comment Tuesday but said he would talk more about it at a conference Wednesday.

At the heart of the debate on the state of AI is whether the primary dangers are in the future or present. On one side are hypothetical scenarios of existential risk caused by computers that supersede human intelligence. On the other are concerns about automated technology that’s already getting widely deployed by businesses and governments and can cause real-world harms.

“For good or for not, what the chatbot moment has done is made AI a national conversation and an international conversation that doesn’t only include AI experts and developers,” said Alondra Nelson, who until February led the White House Office of Science and Technology Policy and its push to craft guidelines around the responsible use of AI tools.

“AI is no longer abstract, and we have this kind of opening, I think, to have a new conversation about what we want a democratic future and a non-exploitative future with technology to look like,” Nelson said in an interview last month.

A number of AI researchers have long expressed concerns about racial, gender and other forms of bias in AI systems, including text-based large language models that are trained on huge troves of human writing and can amplify discrimination that exists in society.

“We need to take a step back and really think about whose needs are being put front and center in the discussion about risks,” said Sarah Myers West, managing director of the nonprofit AI Now Institute. “The harms that are being enacted by AI systems today are really not evenly distributed. It’s very much exacerbating existing patterns of inequality.”

Hinton was one of three AI pioneers who in 2019 won the Turing Award, an honor that has become known as the tech industry’s version of the Nobel Prize. The other two winners, Yoshua Bengio and Yann LeCun, have also expressed concerns about the future of AI.

Bengio, a professor at the University of Montreal, signed a petition in late March calling for tech companies to agree to a 6-month pause on developing powerful AI systems, while LeCun, a top AI scientist at Facebook parent Meta, has taken a more optimistic approach.

The Guild of Music Supervisors’ ninth annual State of Music in Media conference is slated to take place on Saturday, Aug. 19 at The Los Angeles Film School in Hollywood, Calif.

The day-long conference, a collaboration with The Los Angeles Film School, will include panels exploring topics such as the emerging role of AI and other new technologies in the music industry, celebrating the 50th anniversary of hip-hop, and discussing the business and art of music supervision across all crafts.

“The Guild of Music Supervisors is thrilled to be hosting our ninth annual education conference once again at The Los Angeles Film school,” said Guild president Joel C. High in a statement. “It is always our ambition to bring the highest level of panels and discussion as a service to our members and friends. This year there are some critical topics that need to be brought to the community and we cannot wait for all to be involved.” 

There are both in-person and virtual ticket options. The in-person event includes networking, a happy hour, educational panels, live musical performances and one-on-one mentoring sessions for aspiring music supervisors.

Members of the Guild of Music Supervisors and Friends of the Guild will receive a discount on their ticket purchases. Tickets are available to the public at full price and come with a complimentary one-year subscription as a Friend of the Guild. 

Below is a pricing grid of the various ticket options. To learn more, visit https://www.gmsmediaconference.com. To purchase tickets, visit the guild’s ticketing page here.

Universal Music Group chairman/CEO Lucian Grainge took aim at artificial intelligence again on Wednesday (April 26), this time blaming AI for the “oversupply” of “bad” content on streaming platforms and pointing to user-centric payment models as the answer.

AI tools have exploded in popularity in recent months, and Grainge has been an outspoken critic of generative AI being used to mimic copyrighted works, as with the song “Heart on My Sleeve,” which used AI to generate vocals from UMG artists Drake and The Weeknd.

In fervent comments Grainge made during a call discussing UMG’s earnings Wednesday, the executive said AI significantly contributes to a glut of “poor-quality” content on streaming platforms, muddies search experiences for fans looking for their favorite artists and generally has “virtually no consumer appeal.”

“Any way you look at it, this oversupply, whether or not AI-created is, simply, bad. Bad for artists. Bad for fans. And bad for the platforms themselves,” Grainge said.

The head of the world’s largest music company specifically called out the role of generative AI platforms, which are “trained” to produce new creations after being fed vast quantities of existing works known as “inputs.” In the case of AI music platforms, that process involves huge numbers of songs, which many across the music industry argue infringes on artists’ and labels’ copyrights.

Grainge argued that “the flood of unwanted content” generated by AI could be reduced by adopting new payment models from streaming platforms. UMG is currently exploring “artist-centric” models with Tidal and Deezer, while SoundCloud and Warner Music Group also announced a partnership on so-called user-centric royalties last year.

“With the right incentive structures in place, platforms can focus on rewarding and enhancing the artist-fan relationship and, at the same time, elevate the user experience on their platforms, by reducing the sea of ‘noise’ … eliminating unauthorized, unwanted, and infringing content entirely,” Grainge said on Wednesday.

While UMG continues exploring with partners Tidal, Deezer and others what form alternative streaming payment models should take, an analyst on Wednesday’s call asked Grainge if, in the meantime, the company would ever consider licensing songs to an AI platform.

“We are open to licensing … but we have to respect our artists and the integrity of their work,” Grainge said. “We should be the hostess with the mostest. We’re open for business with businesses that are legitimate and (interested in) partnership for growth.”